Anarchists Not In Universities

By design and legacy, universities as formal social institutions easily end up closely conforming to, actively supporting, and strongly defending sociopolitical systems of power and authority, socioeconomic orders of hierarchy and inequality. In how higher education is typically structured and operates, degrees and tenure play a gatekeeping role for the professional-managerial class and serve as a bulwark against any challenges to the ruling elite. The system filters out the non-conformists, iconoclasts, radicals, rabblerousers, and troublemakers. Those who don’t get the message might be kicked out or fired, silenced or blackballed.

Right-wingers have this bizarre fantasy of universities as bastions of left-wing politics. That is as far from the truth as one can get. Few universities have ever welcomed radicals, much less sought to promote activism. The only reason that campuses have been a site of political action is because they are a prime location of institutionalized power. It’s the same reason people protest on Wall Street and in front of the White House. The only way to directly challenge power is to meet it where it resides. And for college students, the power that most affects their lives and is closest within reach is university bureaucracy, which these days is typically run according to a profit model of business management and not Marxist working class control, communist revolt, or democratic self-governance.

There is a reason why, in the Cold War, the CIA hired professors as spymasters and recruited students as agents; and surely the CIA still operates this way (it’s the same reason enemy states try to infiltrate each other’s universities, just as they do each other’s governments). Universities have often been in that key middle position between state and citizenry, sometimes making them a useful tool of propaganda, as American Studies served during the Cold War. And rarely have university staff, including tenured professors, dared to challenge this power structure. After all, if they were the type to do so, they likely wouldn’t have lasted long enough to get a secure position within the hierarchy. Professors in most universities, at least in a country like the United States, quickly learn to keep their heads down. The same has been true in other countries drawn to authoritarianism, as Milton Mayer explained about how the Nazis slowly changed German society, step by step:

Suddenly it all comes down, all at once. You see what you are, what you have done, or, more accurately, what you haven’t done (for that was all that was required of most of us: that we do nothing). You remember those early meetings of your department in the university when, if one had stood, others would have stood, perhaps, but no one stood. A small matter, a matter of hiring this man or that, and you hired this one rather than that. You remember everything now, and your heart breaks. Too late. You are compromised beyond repair.

That is a good transition to what inspired this post. David Graeber is one of the better-known anarchists, at least in the English-speaking world. That is saying something considering how effectively mainstream media and politics exclude anarchists from public awareness and public debate. The higher education system excludes them as well, often by not hiring them or denying them tenure, as was the case with Graeber. Minorities are probably better represented than anarchists in positions of power and authority. Partly, that is because anarchists aren’t prone to seek positions of power and authority in the first place. But even when an anarchist tries to work within the system, most wouldn’t be very happy or likely last long. Graeber’s experience demonstrates this, for not only was he an anarchist but he also came from a lowly and disreputable background, a family of working-class radicalism. Apparently, that makes him precisely what every American university wants to avoid like the plague.

Noam Chomsky, on the other hand, did have a successful career as an anarchist and academic but he did so by entirely separating the two and by compromising his principles in working on Pentagon-funded programs. I have a feeling that Graeber wouldn’t be willing to follow Chomsky’s example.

One has to be willing to admit how much Chomsky compromised, more than some are willing to do, as compromise over time becomes a mindset and a habit. The compromise is political, intellectual, and psychological. This can be seen in the positions Chomsky has taken, which don’t make sense from a position of principled anarchism, but it can also be seen in how on multiple occasions he has acted as a sheepdog for the Democratic Party in telling people to vote for neocons and neoliberals because they are supposedly a lesser evil. Is Hillary Clinton a lesser evil in the way Chomsky’s friend John Deutch, academic turned Deputy Defense Secretary and later Director of the CIA, was supposedly a lesser evil according to Chomsky’s own rationalization? If they are genuinely the lesser evil, why are they such key political actors in promoting greater and greater evil over time?

Chris Knight writes (When Chomsky Worked on Weapons Systems for the Pentagon):

Naturally, having argued that people like Rostow and Faurisson should be able to work in academia, Chomsky was in no position to be too hostile to any of his colleagues at MIT, no matter what they were up to. In the 1980s, for example, MIT’s most notorious academic was its Provost, John Deutch, who was particularly controversial due to his role in bringing biological warfare research to the university.[31] Deutch was also heavily involved in the Pentagon’s chemical weapons strategy, its deployment of MX nuclear missiles and its Nuclear Posture Review of 1994.[32] By this point, student and faculty opposition meant that Deutch had failed in one of his ambitions – to become President at MIT – but he had succeeded in becoming Deputy Defense Secretary. Then, in 1995, President Clinton made him Director of the CIA.

It was around this time that Chomsky was asked about his relationship with Deutch. He replied:

“We were actually friends and got along fine, although we disagreed on about as many things as two human beings can disagree about. I liked him. … I had no problem with him. I was one of the very few people on the faculty, I’m told, who was supporting his candidacy for the President of MIT.”[33]

In another interview, Chomsky was even more positive about his friend, remarking that Deutch “has more honesty and integrity than anyone I’ve ever met in academic life, or any other life. … If somebody’s got to be running the CIA, I’m glad it’s him.”[34]

One of Chomsky’s most controversial political positions concerned Pol Pot’s regime in Cambodia. Although he never denied that the regime committed atrocities, it is hard to read his early writings on this subject without getting the impression that he is understating what was going on in Cambodia under Pol Pot.[35] Chomsky’s right-wing detractors have implied that this was because he had some ideological sympathy with the Pol Pot regime. This was clearly not the case. A better explanation is that it pained Chomsky’s conscience to be too critical of any country that had been so brutally targeted by the Pentagon, i.e. by the same people who had so generously funded his own academic career.

If Chomsky didn’t tell you he was an anarchist, how would you know from his academic career? Well, you couldn’t. He has always argued that ideas are separate from politics, that academia is separate from the personal. No one who is even slightly psychologically self-aware and knowledgeable of the social sciences could make such an argument, but then again Chomsky conveniently dismisses social science out of hand. You can dissociate parts of your life and self, but they never actually exist separately. If anarchism doesn’t inform how you live every aspect of your life, what purpose does it serve in being sectioned off to where it doesn’t personally threaten your lifestyle? If Chomsky isn’t an anarchist in practice when it matters most, such as when money and career are on the line, is he really an anarchist? He would rather not think about that, because his entire career has depended on never answering that question, or rather never acknowledging the default answer.

That isn’t to say that his political work is of no value, but one has to be honest in admitting how much he chose to sacrifice, especially considering how his anarchism so often brings him back to the DNC establishment. So, that compromise wasn’t limited to a brief period of academic work long ago, for it has left a permanent mark on his life and politics, with repercussions in the decades since. Graeber took a different path. He still ended up in academia, just not in the United States. There was nothing stopping Chomsky from working at a different university where he wouldn’t have compromised and been compromised. It would have been a sacrifice, but in the long term it might have been a much smaller sacrifice with greater gains. I guess we will never know.

Interestingly, Graeber’s troubles began at Yale, which like MIT is one of the last places in the world an anarchist would feel at home. It was at Yale that Norman Holmes Pearson was a student and later a professor, and he acted as a World War II secret agent for the OSS (Office of Strategic Services), precursor of the CIA. Pearson was one of the major figures who established American Studies at Yale. He also went on to teach and train James Jesus Angleton, who served for 21 years as the CIA’s chief of counterintelligence and became one of the most respected and feared agents in the non-communist world. John Hartley said of him that “His obsessive search for spies turned to domestic suspects during the Johnson and Nixon presidencies, among them the liberal and countercultural elite of American society, including Martin Luther King and Edward Kennedy.” Angleton wielded much power and, along with catching actual spies, destroyed the careers and lives of many innocent people. Under the Johnson and Nixon administrations, he was in charge of CIA domestic spying for Operation Chaos. That is what higher education in the United States is mixed up with.

Is it surprising that an anti-authoritarian activist would have a hard time getting tenure at Yale? Not really. So much for universities being a haven for left-wingers and a hotbed of radicalism. This would also explain, as I’ve noticed, the scarcity of academic research on anarchism (even an anarchist like Chomsky who gets into academia won’t dare to apply his anarchism to his academic work, much less make it a focus; or else he wouldn’t have had a long academic career). Meanwhile, there are many millions of pages of academic research obsessing over authoritarianism. Maybe there is a reason authoritarians find universities, especially the Ivy League colleges, to be a convenient place to promote their careers. There are more academics who will write and teach about authoritarianism than will actually stand up to abuses of power in the real world. This makes one wonder what the real purpose of studying authoritarianism in an academic setting is: to prevent it or to promote it?

* * *

Unraveling the Politics of Silencing
by Laura Nader

A young David Graeber came from a blue collar family. His mother was a union organizer for New York garment workers and his father fought in the Spanish Civil War. Graeber went to the University of Chicago for graduate work. He carried out his first major fieldwork in Madagascar. After Chicago, he was an assistant professor of anthropology at Yale, from 1998 to 2007, author of Toward an Anthropological Theory of Value (2001) and Fragments of an Anarchist Anthropology (2004). Although he was prolific and a clear writer, his contract was not renewed at Yale. He had during his Yale stay been doing fieldwork on anarchism in New York, participant observing, and eventually became one of the founders of the Occupy Wall Street Movement (Graeber 2013). He describes himself as a scholar in New Haven, an activist in New York. But after Yale, Graeber has not been able to get a job in the United States.

The Sounds of Anthropological Silence
by David Price

David Graeber’s work is exceptional. He is a rare scholar who is able to grapple with complex social theory in a very straightforward way, but it seems that it was his decision to not let theory simply be theory that led to his leaving Yale. I am sure that had Professor Graeber been satisfied with only writing books and articles for other academics on the problems of pay inequities and globalization he could today be sipping a dry martini within the secure confines of the Yale Faculty Club. But moving beyond theory to action is seldom welcomed on university campuses when one is studying inequality.

I think that self-proclaimed anarchists can fit into an establishment university, so long as their anarchism is limited to the written and spoken word—universities can and do welcome people espousing all sorts of beliefs; it is just when professors and students behaviorally challenge power structures either off or on campus that trouble begins. It would seem that Professor Graeber’s activism both on and off campus is what put the kybosh on his tenure application. Another way of looking at this is to say that activism matters—matters so much in fact that those who engage in it must be marginalized.

It Wasn’t a Tenure Case – A Personal Testimony, with Reflections
by David Graeber

There are many mysteries of the academy which would be appropriate objects of ethnographic analysis. One question that never ceases to intrigue me is tenure. How could a system ostensibly designed to give scholars the security to be able to say dangerous things have been transformed into a system so harrowing and psychologically destructive that, by the time scholars find themselves in a secure position, 99% of them have forgotten what it would even mean to have a dangerous idea? How is the magic effected, systematically, on the most intelligent and creative people our societies produce? Shouldn’t they of all people know better? There is a reason the works of Michel Foucault are so popular in US academia. We largely do this to ourselves. But for this very reason such questions will never be researched. […]

It is difficult to exaggerate the importance of social class. I was told by one ally at Yale that my problem was that owing to my proletarian background and general comportment, I was considered “unclubbable.” That is, if one is not from a professional-managerial background, one can be accepted by one’s “betters,” but only if one makes it clear such acceptance is one’s highest life aspiration. Otherwise, ideas or actions that among the well-born would likely be treated as amusing peccadillos—such as an embrace of anti-authoritarian politics—will be considered to disqualify one from academic life entirely. […]

The (tacitly authoritarian) insistence on acting as if institutions could not possibly behave the way the anthropology department at Yale did in fact behave leads almost necessarily to victim-blaming. As a result, bullying—which I have elsewhere defined as unprovoked attacks designed to produce a reaction which can be held out as retrospective justification for the attacks themselves—tends to be an effective strategy in academic contexts. Once my contract was not renewed, I was made aware that within the larger academic community, any objections I made to how I’d been treated would themselves be held out as retroactive justification for the non-renewal of my contract. If I was accused of being a bad teacher or scholar, and I objected that my classes were popular and my work well regarded, this would show I was self-important, and hence a bad colleague, which would then be considered the likely real reason for my dismissal. If I suggested political or even personal bias on the part of any of those who opposed renewal of my contract, I would be seen as paranoid, and therefore as likely having been let go for that very reason… And so on.

Are Wrens Smarter Than Racists?

Race realists and racial supremacists have many odd notions. For one, they believe the human races are separate species, despite all the evidence to the contrary (e.g., humans have unusually low genetic diversity compared with similar species; two random humans are more likely to be genetically similar than two random chimpanzees).

But an even stranger belief is the assumption that humans, despite being such a highly social species, are incapable of cooperating with other humans who are perceived as different based on modern social constructions of ‘race’. Yet, even ignoring the fact that all humans are of the same species, numerous other species cooperate all the time across large genetic divides. This includes the development of close relationships between individuals of separate species.

So, why do racists believe that ‘white’ Americans and ‘black’ Americans must be treated as separate species and be inevitably segregated into different communities and countries? That particularly doesn’t make sense considering most so-called African-Americans are significantly of European ancestry, not to mention the surprising number of supposed European-Americans in the South who have non-European ancestry (African, Native American, etc.).

Wrens don’t let racism get in the way of promoting their own survival through befriending other species who share their territory. Do human racists think they have less cognitive capacity than wrens? If that is their honest assessment of their own abilities, that is fine. But why do they assume everyone else is as deficient as they are?

* * *

Birds from different species recognize each other and cooperate
by Matt Wood, University of Chicago


Cooperation among different species of birds is common. Some birds build their nests near those of larger, more aggressive species to deter predators, and flocks of mixed species forage for food and defend territories together in alliances that can last for years. In most cases, though, these partnerships are not between specific individuals of the other species—any bird from the other species will do.

But in a new study published in the journal Behavioral Ecology, scientists from the University of Chicago and University of Nebraska show how two different species of Australian fairy-wrens not only recognize individual birds from other species, but also form long-term partnerships that help them forage and defend their shared space as a group.

“Finding that these two species associate was not surprising, as mixed species flocks of birds are observed all over the world,” said Allison Johnson, PhD, a postdoctoral scholar at the University of Nebraska who conducted the study as part of her dissertation research at UChicago. “But when we realized they were sharing territories with specific individuals and responding aggressively only to unknown individuals, we knew this was really unique. It completely changed our research and we knew we had to investigate it.”

Variegated fairy-wrens and splendid fairy-wrens are two small songbirds that live in Australia. The males of each species have striking, bright blue feathers that make them popular with bird watchers. Their behavior also makes them an appealing subject for biologists. Both species feed on insects, live in large family groups, and breed during the same time of year. They are also non-migratory, meaning they live in one area for their entire lives, occupying the same eucalyptus scrublands that provide plenty of bushes and trees for cover.

When these territories overlap, the two species interact with each other. They forage together, travel together, and seem to be aware of what the other species is doing. They also help each other defend their territory from rivals. Variegated fairy-wrens will defend their shared territory from both variegated and splendid outsiders; splendid fairy-wrens will do the same, while fending off unfamiliar birds from both species.

“Splendid and variegated fairy-wrens are so similar in their habitat preferences and behavior, we would expect them to act as competitors. Instead, we’ve found stable, positive relationships between individuals of the two species,” said Christina Masco, PhD, a graduate student at UChicago and a co-author on the new paper.

The Coming Collapse

“I seriously believe this country deserves everything that’s going to happen to it. War, revolution, madness, the whole bag.”
~ Hunter S. Thompson, 1968

Authoritarian strains in American politics and economy have a long history. Major American figures, including President Jimmy Carter, have warned that the United States is now a banana republic. To put emphasis on the nonpartisan nature of this judgment, it was during the last Democratic administration that Carter stated in no uncertain terms that, “America does not at the moment have a functioning democracy.” That was before anyone knew of Donald Trump running for the presidency.

Plenty of data supports this assessment, such as American democracy recently being downgraded from a “full democracy” to a “flawed democracy”, according to the Democracy Index of the Economist Intelligence Unit (EIU). But even long before that, others had noted that, in the past, such rising rates of inequality were always a key indicator that a country had become a banana republic.

One person noted that this was what they were taught in public school a half century ago, back when the United States had low inequality, a large middle class, and high economic mobility. It was taught as a point of comparison, to show how the American economy was superior in the egalitarian American Dream it claimed to represent. Few at the time thought the United States would follow the example of the authoritarian regimes and puppet states our government put into power. It should have been known, though, that what a citizenry allows its government to do to others will eventually and inevitably be done to the citizenry, first targeting the disadvantaged but eventually targeting everyone (as Martin Niemöller famously explained about those in power finally coming for him).

It’s very much been a bipartisan failure of democracy or, if you prefer, a bipartisan success of cynical realpolitik. The ruling elite are doing great, for the moment. And the general public has been kept ignorant and distracted, resulting in a lack of urgency. But all of that matters little to the disempowered and disenfranchised majority, at least until the inevitable collapse or revolt. It is with such a dire threat looming on the horizon that the ruling elite further push disinformation and spectacle.

Most Americans have no idea how bad it has gotten. For example, Americans think economic inequality is far lower than it actually is, and yet the actual levels are far higher than what most Americans believe should be tolerable. As soon as Americans realize they’ve been lied to by their corporatocratic government and corporatist media, that will be the end of the charade, and the ending will come quickly once it starts. On an intuitive level, Americans already grasp that it doesn’t all add up. And this can be seen in every aspect of our society.

Propaganda, no matter how successful, can only deny reality for so long. High inequality, in creating rampant stress and anxiety, is a problem that solves itself by destabilizing the entire system. There is no example in history of a high inequality society that didn’t either destroy itself or else become more authoritarian in delaying collapse, and in either case the process involves a sharp increase of public unrest and revolt, along with violence and death. The Bernie Sanders campaign was a last-ditch effort to avoid this fate. But now we are too far gone. We will be forced to ride this out to its bitter end.

To be clear, Sanders isn’t a socialist nor are most of his supporters. Sanders is a rather moderate social democrat who is to the right of the old school New Deal Democrats, not coming close to the large-scale reforms and high tax rates supported by presidents in both parties earlier last century. It isn’t only populists threatening the powerful. Even among the powerful, there are those who don’t see the situation as sustainable.

Nick Hanauer, a wealthy businessman and early investor in Amazon, has warned about the pitchforks coming for the plutocrats. He makes this warning because, as with Adam Smith, he knows inequality is bad for any hope of a free society and free economy. And Hanauer is talking not only to Trump-like Republicans but also to major Democratic political operators such as Amazon’s Jeff Bezos. “They’re super exploitive—just unacceptable,” Hanauer says. “What I can guarantee you is that Jeff Bezos is not going to change those things in the absence of somebody putting essentially a gun to his head and forcing him to do it.”

That is one plutocratic Democrat talking about another plutocratic Democrat. If there is that much harsh disagreement among the ruling elite in what was once considered the working class party, imagine the outrage from below that is coming when civil unrest boils over. Instead of listening to the likes of Hanauer and Sanders (and earlier Nader), we got the corruption of the Clinton Democrats, whose power-mongering created the monster in power now, Donald Trump. Don’t forget that Trump, before becoming a Republican president, was a Clinton Democrat and a close friend and supporter of the Clinton family. All of these people represent the splintering of the Democratic Party, splintering along the lines of populism and plutocracy.

As a Silicon Valley pastor of a church in one of the wealthiest neighborhoods in the country, Gregory Stevens bluntly spoke of the wealthy elites of the liberal class who, I might add, heavily fund the Democratic Party. The tech industry has become the core support behind the Democratic neoliberalism of the Clinton Democrats, endlessly espousing empty rhetoric about social justice but never willing to confront the actual problems that they profit from. Here is what Stevens said, according to Sam Levin in a Guardian piece:

“I believe Palo Alto is a ghetto of wealth, power, and elitist liberalism by proxy, meaning that many community members claim to want to fight for social justice issues, but that desire doesn’t translate into action,” Stevens wrote, lamenting that it was impossible for low-income people to live in the city. “The insane wealth inequality and the ignorance toward actual social justice is absolutely terrifying.”

He later added: “The tech industry is motivated by endless profit, elite status, rampant greed, and the myth that their technologies are somehow always improving the world.”

Remind me again how societal decline is all the fault of Donald Trump and his co-conspirators. The only thing that makes Trump different from the Clintons is that he is more honest about his motivations.

Below are two views. The first was recently written under our present Trump administration. And the second was written under the Obama administration. The problems described have been continuously worsening under administrations from both parties going back decades. These kinds of warnings go even further back, as expressed in the predictions by the American Revolutionaries and Founders known as the Anti-Federalists. It’s long been known what kind of society this is and, unless it was changed, what kind of society it would become.

* * *

The Coming Collapse
by Chris Hedges

The Trump administration did not rise, prima facie, like Venus on a half shell from the sea. Donald Trump is the result of a long process of political, cultural and social decay. He is a product of our failed democracy. The longer we perpetuate the fiction that we live in a functioning democracy, that Trump and the political mutations around him are somehow an aberrant deviation that can be vanquished in the next election, the more we will hurtle toward tyranny. The problem is not Trump. It is a political system, dominated by corporate power and the mandarins of the two major political parties, in which we don’t count. We will wrest back political control by dismantling the corporate state, and this means massive and sustained civil disobedience, like that demonstrated by teachers around the country this year. If we do not stand up we will enter a new dark age.

The Democratic Party, which helped build our system of inverted totalitarianism, is once again held up by many on the left as the savior. Yet the party steadfastly refuses to address the social inequality that led to the election of Trump and the insurgency by Bernie Sanders. It is deaf, dumb and blind to the very real economic suffering that plagues over half the country. It will not fight to pay workers a living wage. It will not defy the pharmaceutical and insurance industries to provide Medicare for all. It will not curb the voracious appetite of the military that is disemboweling the country and promoting the prosecution of futile and costly foreign wars. It will not restore our lost civil liberties, including the right to privacy, freedom from government surveillance, and due process. It will not get corporate and dark money out of politics. It will not demilitarize our police and reform a prison system that has 25 percent of the world’s prisoners although the United States has only 5 percent of the world’s population. It plays to the margins, especially in election seasons, refusing to address substantive political and social problems and instead focusing on narrow cultural issues like gay rights, abortion and gun control in our peculiar species of anti-politics.

This is a doomed tactic, but one that is understandable. The leadership of the party, the Clintons, Nancy Pelosi, Chuck Schumer, Tom Perez, are creations of corporate America. In an open and democratic political process, one not dominated by party elites and corporate money, these people would not hold political power. They know this. They would rather implode the entire system than give up their positions of privilege. And that, I fear, is what will happen. The idea that the Democratic Party is in any way a bulwark against despotism defies the last three decades of its political activity. It is the guarantor of despotism. […]

But the warnings from the architects of our failed democracy against creeping fascism, Madeleine Albright among them, are risible. They show how disconnected the elites have become from the zeitgeist. None of these elites have credibility. They built the edifice of lies, deceit and corporate pillage that made Trump possible. And the more Trump demeans these elites, and the more they cry out like Cassandras, the more he salvages his disastrous presidency and enables the kleptocrats pillaging the country as it swiftly disintegrates.

The press is one of the principal pillars of Trump’s despotism. It chatters endlessly like 18th-century courtiers at the court of Versailles about the foibles of the monarch while the peasants lack bread. It drones on and on and on about empty topics such as Russian meddling and a payoff to a porn actress that have nothing to do with the daily hell that, for many, defines life in America. It refuses to critique or investigate the abuses by corporate power, which has destroyed our democracy and economy and orchestrated the largest transfer of wealth upward in American history. The corporate press is a decayed relic that, in exchange for money and access, committed cultural suicide. And when Trump attacks it over “fake news,” he expresses, once again, the deep hatred of all those the press ignores. The press worships the idol of Mammon as slavishly as Trump does. It loves the reality-show presidency. The press, especially the cable news shows, keeps the lights on and the cameras rolling so viewers will be glued to a 21st-century version of “The Cabinet of Dr. Caligari.” It is good for ratings. It is good for profits. But it accelerates the decline. […]

As a foreign correspondent I covered collapsed societies, including the former Yugoslavia. It is impossible for any doomed population to grasp how fragile the decayed financial, social and political system is on the eve of implosion. All the harbingers of collapse are visible: crumbling infrastructure; chronic underemployment and unemployment; the indiscriminate use of lethal force by police; political paralysis and stagnation; an economy built on the scaffolding of debt; nihilistic mass shootings in schools, universities, workplaces, malls, concert venues and movie theaters; opioid overdoses that kill some 64,000 people a year; an epidemic of suicides; unsustainable military expansion; gambling as a desperate tool of economic development and government revenue; the capture of power by a tiny, corrupt clique; censorship; the physical diminishing of public institutions ranging from schools and libraries to courts and medical facilities; the incessant bombardment by electronic hallucinations to divert us from the depressing sight that has become America and keep us trapped in illusions. We suffer the usual pathologies of impending death. I would be happy to be wrong. But I have seen this before. I know the warning signs. All I can say is get ready.

The Divide
by Matt Taibbi
Kindle Locations 67-160

The other thing here is an idea that being that poor means you should naturally give up any ideas you might have about privacy or dignity. The welfare applicant is less of a person for being financially dependent (and a generally unwelcome immigrant from a poor country to boot), so she naturally has fewer rights.

No matter how offensive the image is, it has a weird logic that’s irresistible to many if not most Americans. Even if we don’t agree with it, we all get it.

And that’s the interesting part, the part where we all get it. More and more often, we all make silent calculations about who is entitled to what rights, and who is not. It’s not as simple as saying everyone is the same under the law anymore. We all know there’s another layer to it now.

As a very young man, I studied the Russian language in Leningrad, in the waning days of the Soviet empire. One of the first things I noticed about that dysfunctional wreck of a lunatic country was that it had two sets of laws, one written and one unwritten. The written laws were meaningless, unless you violated one of the unwritten laws, at which point they became all-important.

So, for instance, possessing dollars or any kind of hard currency was technically forbidden, yet I never met a Soviet citizen who didn’t have them. The state just happened to be very selective about enforcing its anticommerce laws. So the teenage farsovshik (black market trader) who sold rabbit hats in exchange for blue jeans outside my dorm could be arrested for having three dollars in his pocket, but a city official could openly walk down Nevsky Avenue with a brand-new Savile Row suit on his back, and nothing would happen.

Everyone understood this hypocrisy implicitly, almost at a cellular level, far beneath thought. For a Russian in Soviet times, navigating every moment of citizenship involved countless silent calculations of this type. But the instant people were permitted to think about all this and question the unwritten rules out loud, it was like the whole country woke up from a dream, and the system fell apart in a matter of months. That happened before my eyes in 1990 and 1991, and I never forgot it.

Now I feel like I’m living that process in reverse, watching my own country fall into a delusion in the same way the Soviets once woke up from one. People are beginning to become disturbingly comfortable with a kind of official hypocrisy. Bizarrely, for instance, we’ve become numb to the idea that rights aren’t absolute but are enjoyed on a kind of sliding scale.

To be extreme about it, on the far end—like, say, in the villages of Pakistan or Afghanistan—we now view some people as having no rights at all. They can be assassinated or detained indefinitely outside any sort of legal framework, from the Geneva conventions on down.

Even here at home, that concept is growing. After the Boston marathon bombings, there was briefly a controversy where we wondered aloud whether the Chechen suspects would be read Miranda rights upon capture. No matter how angry you were about those bombings—and as a Boston native, I wanted whoever was responsible thrown in the deepest hole we have—it was a fascinating moment in our history. It was the first time when we actually weren’t sure if an American criminal suspect would get full access to due process of law. Even on television, the blow-dried talking heads didn’t know the answer. We had to think about it.

Of course, on the other end of the spectrum are the titans of business, the top executives at companies like Goldman and Chase and GlaxoSmithKline, men and women who essentially as a matter of policy now will never see the inside of a courtroom, almost no matter what crimes they may have committed in the course of their business. This is obviously an outrage, and the few Americans who paid close attention to news stories like the deferred prosecution of HSBC for laundering drug money, or the nonprosecution of the Swiss bank UBS for fixing interest rates, were beside themselves with anger over the unfairness of it all.

But the truly dark thing about those stories is that somewhere far beneath the intellect, on a gut level, those who were paying attention understood why those stories panned out the way they did. Just as we very quickly learned to accept the idea that America now tortures and assassinates certain foreigners (and perhaps the odd American or three) as a matter of routine, and have stopped marching on Washington to protest the fact that these things are done in our names, we’ve also learned to accept the implicit idea that some people have simply more rights than others. Some people go to jail, and others just don’t. And we all get it.

I was originally attracted to this subject because, having spent years covering white-collar corruption for Rolling Stone, I was interested in the phenomenon of high-powered white-collar criminals completely avoiding individual punishment for what appeared to be very serious crimes. It’s become a cliché by now, but since 2008, no high-ranking executive from any financial institution has gone to jail, not one, for any of the systemic crimes that wiped out 40 percent of the world’s wealth. Even now, after JPMorgan Chase agreed to a settlement north of $13 billion for a variety of offenses and the financial press threw itself up in arms over the government’s supposedly aggressive new approach to regulating Wall Street, the basic principle held true: Nobody went to jail. Not one person.

Why was that? I quickly realized that it was impossible to answer that question without simultaneously looking at the question of who does go to jail in this country, and why. This was especially true when the numbers were so stark, zero-to-a-few on one hand, millions on the other.

Finding the answer to some of this turns out to be easy, just simple math. Big companies have big lawyers, most street criminals do not, and prosecutors dread waging long wars against bottomless-pocketed megabanks when they can score win after easy win against common drug dealers, car thieves, and the like. After winning enough of these blowout victories, the justice bureaucracy starts drifting inexorably toward the no-sweat ten-second convictions and away from the expensive years-long battles of courtroom attrition.

Unquestionably, however, something else is at work, something that cuts deeper into the American psyche. We have a profound hatred of the weak and the poor, and a corresponding groveling terror before the rich and successful, and we’re building a bureaucracy to match those feelings.

Buried in our hatred of the dependent, in Mitt Romney’s lambasting of the 47 percent, in the water carrier’s contempt for the water drinker, is a huge national psychological imperative. Many of our national controversies are on some level debates about just exactly how much we should put up with from the “nonproducing” citizenry. Even the George Zimmerman trial devolved into a kind of national discussion over whether Trayvon Martin was the kind of person who had the right to walk down the street unmolested, or whether he was a member of a nuisance class, a few pegs down on that sliding scale of rights, who should have submitted to … well, whatever it was that happened.

The weird thing is that the common justification for the discrepancy in prison statistics—the glaring percentage of incarcerated people who are either poor, nonwhite, or both—is that the ghetto denizens are the people who commit the crimes, that their neighborhoods are where the crime is at.

And the common justification for the failure to prosecute executives in corrupt corporations for any crimes that they might commit is that their offenses aren’t really crimes per se but mere ethical violations, morally unfortunate acts not punishable by law. President Obama himself would hint at this in an infamous 60 Minutes interview.

But in practice, as I would find out in a years-long journey through the American justice system, things turn out to be completely different.

Yes, there’s a lot of violent crime in poor neighborhoods. And yes, that’s where most of your gun violence happens.

But for most of the poor people who are being sent away, whether it’s for a day or for ten years, their prison lives begin when they’re jailed for the most minor offenses imaginable. Can you imagine spending a night in jail for possessing a pink Hi-Liter marker? For rolling a tobacco cigarette? How about for going to the corner store to buy ketchup without bringing an ID?

They are sent away because they do the same things rich people do at some time in their lives, usually as teenagers—get drunk and fall down, use drugs, take a leak in an alley, take a shortcut through someone’s yard, fall asleep in a subway car, scream at a boyfriend or girlfriend, hop a fence. Only when they do these things, they’re surrounded by a thousand police, watching their every move.

Meanwhile the supposedly minor violations that aren’t worth throwing bankers in jail for—they turn out to be not so minor. When an employee at the aforementioned British banking giant HSBC—whose executives were ultimately handed a no-jail settlement for the biggest money-laundering case in the history of banking—started looking into how people on terrorist or criminal watch lists opened accounts at his company, he found something odd. In many cases, commas or periods were being surreptitiously added to names, so that they would elude the bank’s computer screening systems.

“That’s something that could only have been done on purpose, by a bank employee,” he said.

What deserves a bigger punishment—someone with a college education who knowingly helps a gangster or a terrorist open a bank account? Or a high school dropout who falls asleep on the F train?

The new America says it’s the latter. It’s come around to that point of view at the end of a long evolutionary process, in which the rule of law has slowly been replaced by giant idiosyncratic bureaucracies that are designed to criminalize failure, poverty, and weakness on the one hand, and to immunize strength, wealth, and success on the other.

We still have real jury trials, honest judges, and free elections, all the superficial characteristics of a functional, free democracy. But underneath that surface is a florid and malevolent bureaucracy that mostly (not absolutely, but mostly) keeps the rich and the poor separate through thousands of tiny, scarcely visible inequities.

For instance, while the trials may be free and fair, unfair calculations are clearly involved in who gets indicted for crimes, and who does not. Or: Which defendant gets put in jail, and which one gets away with a fine? Which offender ends up with a criminal record, and which one gets to settle with the state without admitting wrongdoing? Which thief will pay restitution out of his own pocket, and which one will be allowed to have the company he works for pay the tab? Which neighborhoods have thousands of police roaming the streets, and which ones don’t have any at all?

This is where the new despotism is hidden, in these thousands of arbitrary decisions that surround our otherwise transparent system of real jury trials and carefully enumerated suspects’ rights. This vast extrademocratic mechanism, it turns out, is made up of injustices big and small, from sweeping national concepts like Eric Holder’s Collateral Consequences plan, granting situational leniency to “systemically important” companies, to smaller, more localized outrages like New York City prosecutors subverting speedy trial rules in order to extract guilty pleas from poor defendants who can’t make bail.

Most people understand this on some level, but they don’t really know how bad it has gotten, because they live entirely on one side of the equation. If you grew up well off, you probably don’t know how easy it is for poor people to end up in jail, often for the same dumb things you yourself did as a kid.

And if you’re broke and have limited experience in the world, you probably have no idea of the sheer scale of the awesome criminal capers that the powerful and politically connected can get away with, right under the noses of the rich-people police.

This is a story that doesn’t need to be argued. You just need to see it, and it speaks for itself. Only we’ve arranged things so that the problem is basically invisible to most people, unless you go looking for it.

I went looking for it.

What If Our Economic System Conflicts With Our Human Nature?

What if much or even all of modern advances and wonders happened in spite of capitalism, not because of it?

What if social democracy and political democracy, if a free economy and a free society, are ultimately in complete opposition to everything that has come to be associated with capitalism: hyper-individualism and aggressive competition, consumer-citizenship and worker-citizenship, neoliberal exploitation and resource extraction, theft of the commons and destitution of the masses, education and healthcare disparities, high inequality and economic segregation, rigid hierarchies and a permanent underclass, inherited and concentrated wealth, cronyism and nepotism, a plutocratic ruling elite, a psychopathic corporate model, corporatism and corporatocracy, oligopolies and monopolies, fascism and inverted totalitarianism, a revolving door between big gov and big biz, law and policy determined by big money lobbyists, elections determined by big money donors, etc.?

What then?

Income Accelerates Innovation by Reducing Our Fear of Failure
by Scott Santens

Studies have shown that the very existence of food stamps — just knowing they are there as an option in case of failure — increases rates of entrepreneurship. A study of a reform to the French unemployment insurance system that allowed workers to remain eligible for benefits if they started a business found that the reform resulted in more entrepreneurs starting their own businesses. In Canada, a reform was made to their maternity leave policy, where new mothers were guaranteed a job after a year of leave. A study of the results of this policy change showed a 35% increase in entrepreneurship due to women basically asking themselves, “What have I got to lose? If I fail, I’m guaranteed my paycheck back anyway.”

None of this should be surprising. The entire insurance industry exists to reduce risk. When someone is able to insure something, they are more willing to take risks. Would there be as many restaurants if there was no insurance in case of fire? Of course not. The corporation itself exists to reduce personal risk. Entrepreneurship and risk are inextricably linked. Reducing risk aversion is paramount to innovation.

Such market effects have even been observed in universal basic income experiments in Namibia and India where local markets flourished thanks to a tripling of entrepreneurs and the enabling of everyone to be a consumer with a minimum amount of buying power.

Children’s Helping Hands
by Felix Warneken

Young children are also willing to put some effort into helping. Further studies showed that they continue to help over and over again, even if they have to surmount an array of obstacles to pick up a dropped object or stop playing with an interesting toy. We had to be inventive in creating distracting toys that might lower their tendency to help—flashy devices that lit up and played music; colorful boxes that jingled when you threw a toy cube into them and shot it out the other end. We decided that if we couldn’t sell the scientific community on our results, we could at least go into the toy business.

As noted, the behavior of our little subjects did not seem to be driven by the expectation of praise or material reward. In several studies, the children’s parents weren’t in the room, and thus the helping cannot be explained by their desire to look good in front of Mom. In one study, children who were offered a toy for helping were no more likely to help than those children who weren’t. In fact, material rewards can even have a detrimental effect on helping: During the initial phase of another experiment, half the children received a reward for helping and the other half did not. Subsequently, when the children again had the opportunity to help but now without a reward being offered to those in either group, the children who had been rewarded initially were less likely to help spontaneously than the children from the no-reward group. This perhaps surprising result suggests that children’s helping is intrinsically motivated rather than driven by the expectation of material reward. Apparently, if such rewards are offered, they can change children’s original motivation, causing them to help only because they expect to receive something for it.

The Case Against Rewards and Praise
A Conversation with Alfie Kohn

by Sara-Ellen Amster

Rewards kill creativity. Some twenty studies have shown that people do inferior work when they are expecting to get a reward for doing it, as compared with people doing the same task without any expectation of a reward. That effect is most pronounced when creativity is involved in the task.

Rewards undermine risk-taking. When I have been led to think of the “A” or the sticker or the dollar that I’m going to get, I do as little as I have to, using the most formulaic means at my disposal, to get through the task so I can snag the goody. I don’t play with possibilities. I don’t play hunches that might not pay off. I don’t attend to incidental stimuli that might or might not turn out to be relevant. I just go for the gold. Studies show that people who are rewarded tend to pick the easiest possible task. When the rewards are removed, we tend to prefer more challenging things to do. Everyone has seen students cut corners and ask: “Do we have to know this? Is this going to be on the test?”

But we have not all sat back to reflect on why this happens. It’s not laziness. It’s not human nature. It’s because of rewards. If the question is “Do rewards motivate students?” the answer is “Absolutely. They motivate students to get rewards.” And that’s typically at the expense of creativity.

Rewards undermine intrinsic motivation. At least seventy studies have shown that people are less likely to continue working at something once the reward is no longer available, compared with people who were never promised rewards in the first place. The more I reward a child with grades, for example, the less appeal those subjects will have to the child. It is one of the most thoroughly researched findings in social psychology, yet it is virtually unknown among educational psychologists, much less teachers and parents.

Is Shame Necessary?
by Jennifer Jacquet
Kindle Locations 626-640

Some evidence from work on moral licensing disagrees with this assumption that buying green is a good first step. People who buy eco-products can apparently more easily justify subsequent greed, lying, and stealing. A 2009 study showed that participants who were exposed to green products in a computer-simulated grocery store acted more generously in experiments that followed, but that participants who actually purchased green products over conventional ones then behaved more selfishly.[7] A 2013 study confirmed suspicions about slacktivism when research showed that people who undertook token behaviors to present a positive image in front of others—things like signing a petition or wearing a bracelet or “liking” a cause—were less likely to engage with the cause in a meaningful way later than others who made token gestures that were private.[8] This research suggests that linking “green” to conspicuous consumption might be a distraction and lead to less engagement later on. If this is true, we should not be encouraged to engage with our guilt as disenfranchised consumers, capable of making a change only through our purchases, and instead encouraged to engage as citizens. Markets might even undermine norms for more serious environmental behavior. In some cases, as has been noted in Western Australia, eco-labeling fisheries may even be giving fishing interests leverage against establishing marine protected areas, where fishing would be prohibited or more heavily regulated, on the grounds that protection is not needed if the fisheries in those areas are already labeled eco-friendly.[9] The market for green products might sedate our guilt without providing the larger, serious outcomes we really desire.

Strange Contagion
by Lee Daniel Kravetz
Kindle Locations 1157-1169

Grant’s research is at the forefront of work motivation and leadership. Oddly, despite teaching in a school dominated by economists, he’s landed at a surprising place in terms of the one social contagion he grudgingly propagates. “The study of economics pushes people toward a selfish extreme,” he tells me after his class lets out. More to the point, he says, “The scholarship of economics is responsible for spreading a contagion of greed.”

The Cornell University economist Robert H. Frank has discovered many examples of this, Grant says. Consider that professors of economics give less to charity than professors in other fields. Or that students of economics are more likely to practice deception for personal gain. Then there’s the fact that students majoring in economics routinely rate greed as generally good, correct, and moral. In fact, says Grant, simply thinking about economics chips away at one’s sense of compassion for others. Studying economics also makes people become less giving and more cynical. Students who rank high in self-interest might self-select for degrees and careers in economics-related fields, but by learning about economics they wind up catching more extreme beliefs than those they possess when they first register for class. By spending time with like-minded people who believe in and act on the principle of self-interest, students of economics can become convinced that selfishness is widespread and rational. Self-interest becomes the norm. Individual players within the whole unconsciously model and catch behaviors, in turn pushing ethical standards.

White-on-White Violence, Cultural Genocide, and Historical Trauma

“What white bodies did to Black bodies they did to other white bodies first.”
~ Janice Barbee

How Racism Began as White-on-White Violence
by Resmaa Menakem

Yet this brutality did not begin when Black bodies first encountered white ones. This trauma can be traced back much further, through generation upon generation of white bodies, to medieval Europe.

When the Europeans came to America after enduring 1000 years of plague, famine, inquisitions, and crusades, they brought much of their resilience, much of their brutality, and, I believe, a great deal of their trauma with them. Common punishments in the “New World” English colonies were similar to the punishments meted out in England, which included whipping, branding, and cutting off ears. People were routinely placed in stocks or pillories, or in the gallows with a rope around their neck. In America, the Puritans also regularly murdered other Puritans who were disobedient or found guilty of witchery.

In such ways, powerful white bodies routinely punished less powerful white bodies. In 1692, during the Salem witch trials, eighty-year-old Giles Corey was stripped naked and, over a period of two days, slowly crushed to death under a pile of rocks.

We know that the English in America, and their descendants, dislodged brains, blocked airways, ripped muscle, extracted organs, cracked bones, and broke teeth in the bodies of many Black people, Native peoples, and other white colonists. But what we often fail to recognize about this “New World” murder, cruelty, oppression, and torture is that, until the second half of the seventeenth century, these traumas were inflicted primarily on white bodies by other white bodies — all on what would become US soil. […]

Throughout the United States’ history as a nation, white bodies have colonized, oppressed, brutalized, and murdered Black and Native ones. But well before the United States began, powerful white bodies colonized, oppressed, brutalized, and murdered other, less powerful white ones.

The carnage perpetrated on Black people and Native Peoples in the “New World” began, on the same soil, as an adaptation of longstanding white-on-white traumatic retention strategies and brutal class practices. This brutalization created trauma that has yet to be healed among American bodies of all hues today.

Chinese Social Political Stability Rests in “Dual Faceted Identity System” (A Model Societal System Analysis based on Recent Rise of White Nationalism in US)
by killingzoo

Equally interesting, while some minority groups in the US seem to become more unhappy as they gain power, Asians in general still have little political influence in the US and yet have remained very calm.

The clue lies in some of the worst examples: Kevin Yee, the third-generation Chinese American neo-Nazi supporter, and Adolf Hitler himself (who had a Jewish grandmother).

One friend said to me: these neo-Nazi “White nationalists” don’t even know who they are (where they came from).

Same problem with Yee and Hitler: they forgot (or never knew) their own heritage, so they convinced themselves to follow and worship a mythical “White” identity that never really existed. “White” is just the color of their skin; it doesn’t tell them anything about where their ancestors came from.

Heck, some neo-Nazis probably also had Native American and African slave bloodlines in their families!

The monolithic “assimilation” of America has forced too many Americans to integrate and forget their native cultures and native languages on the many sides of their families.

The opposite examples are the “hyphenated Americans”: Chinese Americans, Jewish Americans, etc.

The “hyphenation” denotes a multi-faceted identity for these groups. Chinese Americans are known for strongly preserving their Chinese culture and language, even as they integrate into US political and economic processes.

Being “hyphenated” and multi-faceted in identity has the benefit of greater tolerance for “others.” Because such groups recognize that they came from elsewhere, they tend to extend greater tolerance to those who are different or new to the US, since a Chinese American is himself also different from many other Americans.

It’s hardly sensible for a Chinese American to demand that a new immigrant “speak proper English” when others could easily make jokes about his own accent. (Though Kevin Yee might do so.)

For this reason, many hyphenated American groups with strong multi-faceted identities tend to be very tolerant, less inclined to feel that they are under threat from other groups, and more likely to be liberal in political and social views, even if they are conservative in fiscal beliefs. For example, Jewish Americans are typically fiscally conservative but socially liberal. Similarly, Mormons (with their religious enclaves in Utah) tend to be conservative but very welcoming of immigrants. Mennonites of Dutch origin also tend to be conservative in lifestyle, yet hold very tolerant and friendly views of others.

Epigenetic Memory and the Mind

Epigenetics is fascinating, even bizarre by conventional thought. Some worry that it’s another variety of determinism, just not located in the genes. I have other worries, if not that particular one.

How epigenetics works is that a gene gets switched on or off. The key point is that it’s not permanently set. Some later incident, condition, behavior, or whatever can switch it back the other way again. Genes in your body are switched on and off throughout your lifetime. But presumably, if no significant changes occur in one’s life, some epigenetic expressions remain set for one’s entire life.

Where it gets fascinating is that epigenetic changes have been shown to get passed on across multiple generations, and no one is certain how many. In mice, the effect can extend upwards of seven generations or so, as I recall. Humans, of course, haven’t been studied for that many generations, but present evidence indicates the process operates similarly in us.
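To make the switching idea concrete, here is a toy sketch in Python. It only illustrates the general shape of the idea: an on/off state carried across generations, occasionally reset by later conditions. The states and probabilities are invented for illustration and are not drawn from any actual epigenetics research.

```python
import random

# Toy sketch: a "gene" carries a binary epigenetic mark that each
# generation inherits, with a small chance that some later event
# (stress, famine, changed conditions) flips it back.
# All probabilities here are invented for illustration only.

def simulate(generations, flip_chance=0.1, seed=42):
    random.seed(seed)
    mark = "on"  # an ancestor's experience switched the gene on
    history = []
    for g in range(generations):
        # each generation, conditions may reset the inherited mark
        if random.random() < flip_chance:
            mark = "off" if mark == "on" else "on"
        history.append((g + 1, mark))
    return history

for generation, state in simulate(7):
    print(f"generation {generation}: gene {state}")
```

The point of the sketch is simply that, unlike a fixed DNA sequence, the mark is state, not structure: it persists by default but remains changeable.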

Potentially, all of the major tragedies in modern history (the violence of colonialism all around the world, major famines in places like Ireland and China, genocides in places like the United States and Rwanda, international conflicts like the world wars, etc.) are within the range of epigenetics. It’s been shown, for example, that famine switches genes for a few generations in ways that cause increased fat retention, which in the modern world means higher obesity rates.

I’m not sure what the precise mechanism is that causes genes to switch on and off (e.g., precisely how starvation gets imprinted on biology and stays set that way for multiple generations). All I know is that it has to do with the proteins that encase the DNA. The main interest is that, once we understand the mechanism, we may be able to control the process. This might be a way of preventing or managing numerous physical and psychiatric health conditions. So it really would mean the opposite of determinism.

This research reminds me of other scientific and anecdotal evidence. Consider the recipients of organ transplants, blood and bone marrow transfusions, and microbiome transference. These involve the exchange of cells from one body to another. The results have shown changes in mood, behavior, biological functioning, etc.

For example, introducing a new microbiome can make a skinny rodent fat or a fat rodent skinny. But shifts in fairly specific memories have also been observed, such as an organ transplant recipient craving something the organ donor craved. Furthermore, research has shown that genetic material can jump from introduced cells to the cells already present, which is how a baby can potentially end up with cells from two fathers if a previous pregnancy was by a different father. In fact, it’s rather common for people to carry multiple lines of DNA in their bodies.

It intuitively makes sense that epigenetics would be behind memory. It’s easy to argue that no other function in the body has this kind and degree of capacity. And that possibility would blow up our ideas of the human mind. In that case, some element of memories would get passed on across multiple generations, explaining certain similarities seen in families and in larger populations with shared epigenetic backgrounds.

This gives new meaning to the theories of both the embodied mind and the extended mind. There might also be some interesting implications for the bundle theory of mind. I wonder too about something like enactivism, which concerns the human mind’s relation to the world. Of course, there are obvious connections between this specific research and neurological plasticity, and between epigenetics more generally and intergenerational trauma.

So, it wouldn’t only be the symptoms of trauma or the benefits of privilege (or whatever other conditions shape individuals, generational cohorts, and sub-populations) being inherited, but some of the memory itself. This puts bodily memory in a much larger context, maybe even something along the lines of Jungian thought, in terms of collective memory and archetypes (depending on how long-lasting some epigenetic effects might be). Also, much of what people think of as cultural, ethnic, and racial differences might simply be epigenetics. This would puncture an even larger hole in genetic determinism and race realism. Unlike genetics, epigenetics can be changed.

Our understanding of so much is going to be completely altered. What once seemed crazy or unthinkable will become the new dominant paradigm. This is both promising and scary. Imagine what authoritarian governments could do with this scientific knowledge. The Nazis could only dream of creating a superman. But between genetic engineering and epigenetic manipulations, the possibilities are wide open. And right now, we have no clue what we are doing. The early experimentation, specifically research done covertly, is going to be of the mad scientist variety.

These interesting times are going to get way more interesting.

* * *

Could Memory Traces Exist in Cell Bodies?
by Susan Cosier

The finding is surprising because it suggests that a nerve cell body “knows” how many synapses it is supposed to form, meaning it is encoding a crucial part of memory. The researchers also ran a similar experiment on live sea slugs, in which they found that a long-term memory could be totally erased (as gauged by its synapses being destroyed) and then re-formed with only a small reminder stimulus—again suggesting that some information was being stored in a neuron’s body.

Synapses may be like a concert pianist’s fingers, explains principal investigator David Glanzman, a neurologist at U.C.L.A. Even if Chopin did not have his fingers, he would still know how to play his sonatas. “This is a radical idea, and I don’t deny it: memory really isn’t stored in synapses,” Glanzman says.

Other memory experts are intrigued by the findings but cautious about interpreting the results. Even if neurons retain information about how many synapses to form, it is unclear how the cells could know where to put the synapses or how strong they should be—which are crucial components of memory storage. Yet the work indeed suggests that synapses might not be set in stone as they encode memory: they may wither and re-form as a memory waxes and wanes. “The results are really just kind of surprising,” says Todd Sacktor, a neurologist at SUNY Downstate Medical Center. “It has always been this assumption that it’s the same synapses that are storing the memory,” he says. “And the essence of what [Glanzman] is saying is that it’s far more dynamic.”

Memory Transferred Between Snails, Challenging Standard Theory of How the Brain Remembers
by Usha Lee McFarling

Glanzman’s experiments—funded by the National Institutes of Health and the National Science Foundation—involved giving mild electrical shocks to the marine snail Aplysia californica. Shocked snails learn to withdraw their delicate siphons and gills for nearly a minute as a defense when they subsequently receive a weak touch; snails that have not been shocked withdraw only briefly.

The researchers extracted RNA from the nervous systems of snails that had been shocked and injected the material into unshocked snails. RNA’s primary role is to serve as a messenger inside cells, carrying protein-making instructions from its cousin DNA. But when this RNA was injected, these naive snails withdrew their siphons for extended periods of time after a soft touch. Control snails that received injections of RNA from snails that had not received shocks did not withdraw their siphons for as long.

“It’s as if we transferred a memory,” Glanzman said.

Glanzman’s group went further, showing that Aplysia sensory neurons in Petri dishes were more excitable, as they tend to be after being shocked, if they were exposed to RNA from shocked snails. Exposure to RNA from snails that had never been shocked did not cause the cells to become more excitable.

The results, said Glanzman, suggest that memories may be stored within the nucleus of neurons, where RNA is synthesized and can act on DNA to turn genes on and off. He said he thought memory storage involved these epigenetic changes—changes in the activity of genes and not in the DNA sequences that make up those genes—that are mediated by RNA.

This view challenges the widely held notion that memories are stored by enhancing synaptic connections between neurons. Rather, Glanzman sees synaptic changes that occur during memory formation as flowing from the information that the RNA is carrying.

Stress Is Real, As Are The Symptoms

I was reading a book, Strange Contagion by Lee Daniel Kravetz, in which he dismisses complaints about wind turbines (e.g., low-frequency sounds). It’s actually a great read, even as I disagree with elements of it, such as his entirely overlooking inequality as a cause of strange contagions (public hysteria, suicide clusters, etc.) — an issue explored in depth by Keith Payne in The Broken Ladder and briefly touched upon by Kurt Andersen in Fantasyland.

By the way, one might note that where wind farms are located, as with where toxic dumps are located, has everything to do with economic, social, and political disparities — specifically as exacerbated by poverty, economic segregation, residential isolation, failing local economies, dying small towns, inadequate healthcare, underfunded or non-existent public services, limited coverage in the corporate media, underrepresentation in positions of power and authority, etc. (many of the things that get dismissed in defense of the establishment and status quo). And one might note that the dismissiveness toward inequality problems strongly resembles the dismissiveness toward wind turbine syndrome or wind farm syndrome.

About wind turbines, Kravetz details the claims against them, writing that, “People closest to the four-hundred-foot-tall turrets receive more than just electricity. The turbines interrupt their sleep patterns. They also generate faint ringing in their ears. Emissions cause pounding migraine headaches. The motion of the vanes also creates a shadow flicker that triggers disorientation, vertigo, and nausea” (Kindle Locations 959-961). But he goes on to assert that the explanation of cause is entirely without scientific substantiation, even as the symptoms are real:

“Grievances against wind farms are not exclusive to DeKalb County, with a perplexing illness dogging many a wind turbine project. Similar complaints have surfaced in Canada, the UK, Italy, and various US cities like Falmouth, Massachusetts. In 2009 the Connecticut pediatrician Nina Pierpont offered an explanation. Wind turbines, she argued, produce low-frequency noises that induce disruptions in the inner ear and lead to an illness she calls wind turbine syndrome. Her evidence, now largely discredited for sample size errors, a lack of a control group, and no peer review, seemed to point to infrasound coming off of the wind farms. Since then more than a dozen scientific reviews have firmly established that wind turbines pose no unique health risks and are fundamentally safe. It doesn’t seem to matter to the residents of DeKalb County, whose symptoms are quite real.” (Kindle Locations 961-968)

He concludes that it is “wind farm hysteria.” It is one example he uses in exploring the larger issue of what he calls strange contagions, partly related to Richard Dawkins’s theory of memes, although he considers the phenomenon more broadly to include the spread of not just thoughts and ideas but emotions and behaviors. Indeed, he makes a strong overall case in his book, and I’m largely persuaded, or rather it fits the evidence I’ve previously seen elsewhere. But sometimes his focus is too narrow and conventional. There are valid reasons to consider wind turbines as potentially problematic for human health, despite our not having precisely ascertained the path of causation.

Stranger Dimensions put out an article by Rob Schwarz, Infrasound: The Fear Frequency, that is directly relevant to the issue. He writes that, “Infrasound is sound below 20 Hz, lower than humans can perceive. But just because we don’t consciously hear it, that doesn’t mean we don’t respond to it; in certain individuals, low-frequency sound can induce feelings of fear or dread or even depression. […] In humans, infrasound can cause a number of strange, seemingly inexplicable effects: headaches, nausea, night terrors and sleep disorders.”

Keep in mind that wind turbines do emit infrasound. The debate has been over whether infrasound can cause ‘disease’ or mere irritation and annoyance. That framing is based on a simplistic and uninformed understanding of stress. A wide array of research has already proven beyond any doubt that continuous stress is a major contributing factor to numerous physiological and psychological health conditions, which of course relates to the high levels of stress in high-inequality societies. In fact, as research shows, ongoing background stress can be more traumatizing over the long term than brief traumatic events. Trauma is simply unresolved stress and, when there are multiple stressors in one’s environment, there is no way to either confront them or escape them. Only part of the population suffers from severe stress, whether from a single stressor or from several, but stress in general has vastly increased — as Kravetz states in a straightforward manner: “Americans, meanwhile, continue to experience more stress than ever, with one study I read citing an increase of more than 1,000 percent in the past three decades” (Kindle Locations 2194-2195).
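For a rough sense of why turbine emissions fall in the infrasound range, consider the blade-pass frequency, the rate at which a blade sweeps past the tower. The blade count and rotor speed below are assumed typical figures, not measurements from any particular wind farm:

```python
# Back-of-the-envelope: the dominant low-frequency pulse from a turbine
# occurs each time a blade passes the tower (the blade-pass frequency).
# blade_pass_hz = number_of_blades * rotor_revolutions_per_second
# The rotor speed is an assumed typical value for large turbines,
# which commonly spin somewhere around 10-20 rpm.

blades = 3
rotor_rpm = 15  # assumed typical figure, for illustration

blade_pass_hz = blades * (rotor_rpm / 60)

print(f"blade-pass frequency: {blade_pass_hz:.2f} Hz")  # 0.75 Hz
print(f"below the ~20 Hz hearing threshold: {blade_pass_hz < 20}")
```

At well under 20 Hz, such a pulse would be registered by the body, if at all, rather than consciously heard, which fits the ambiguity of the reported symptoms.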

The question isn’t whether stress is problematic but how stressful continuous low-frequency sound is, specifically when combined with other stressors, as is the case for many disadvantaged populations near wind farms. Besides infrasound, wind turbines are obtrusive, with blinking lights, shadow flicker, and rhythmic pressure pulses on buildings. No research so far has studied the direct influence of long-term, even if low-level, exposure to multiple and often simultaneous stressors, so there is no way for anyone to honestly conclude that wind turbines aren’t significantly contributing to health concerns, at least for those already sensitized or otherwise in a state of distress (which would describe many rural residents near wind farms, considering dying communities and departing younger generations, the loss of social support that would otherwise lessen the impact of stress). Even the doubters admit it has been proven that wind turbines cause annoyance and stress; the debate is over how much and with what impact. Still, that isn’t to argue against wind power and for old energy industries like coal. Rather, wind energy technology could perhaps be improved, which would ease our transition to alternative energy.

It does make one wonder what we don’t yet understand about how not-easily-observed factors can significantly influence us. Human senses are severely limited, and so we are largely unaware of the world around us, even when it is causing us harm. The human senses can’t detect tiny parasites, toxins, climate change, etc. And the human tendency is to deny the unknown, even when it is obvious something is going on. It is particularly easy for those not impacted to dismiss those who are, as when middle-to-upper class citizens, corporate media, government agencies, and politicians ignore the severe lead toxicity rates among mostly poor minorities in old industrial areas. Considering that, maybe scientists who do research and politicians who pass laws should be required to live for several years surrounded by lead toxicity and wind turbines. Then maybe the symptoms would seem more real, and we might finally find a way to help those harmed, if only by reducing some of the risk factors, including stress.

The article by Schwarz went beyond this, and in doing so went in an interesting direction. He explains that, “If infrasound hits at just the right strength and frequency, it can resonate with human eyes, causing them to vibrate. This can lead to distorted vision and the possibility of ‘ghost’ sightings. Or, at least, what some would call ghost sightings. Infrasound may also cause a person to ‘feel’ that there’s an entity in the room with him or her, accompanied by that aforementioned sense of dread.” He describes an incident in a laboratory that came to have a reputation for feeling haunted, the oppressive atmosphere having disappeared when a particular fan was turned off. It turned out the fan had been vibrating at just the right frequency to produce a particular low-frequency sound. Now, that is fascinating.

This reminds me of Fortean observations. It’s been noted by a number of paranormal and UFO researchers, such as John Keel, that various odd experiences tend to happen in the same places. UFOs are often repeatedly sighted by different people in the same locations, and often at those same locations there will be bigfoot sightings and accounts of other unusual happenings. Jacques Vallee also noted that certain Fortean incidents tend to follow the same patterns, such as numerous descriptions of UFO abductions matching folktales about fairy abductions and the anthropological literature on shamanistic initiations.

Or consider what are sometimes called fairy lights. No one knows what causes them, but even scientists have observed them. There are many sites specifically known for their fairy lights. My oldest brother went to one of those places and indeed saw the same thing that thousands of others had seen. The weird thing about these balls of light is that it’s hard to discern exactly how far away they are; they go from seeming close to seeming far. It’s possible that there is nothing actually there and instead some frequency is affecting the brain.

Maybe there is a diversity of human experiences that have common mechanisms or involve overlapping factors. In that case, we simply haven’t figured them out yet. But improved research methods might allow us to look more closely at typically ignored and previously unknown causes. Not only might this better the lives of many, but it might also deepen our understanding of the human condition.

The Madness of Reason

Commenting online brings one in contact with odd people. It is often merely irritating, but at times it can be fascinating to see all the strange ways humanity gets expressed.

I met a guy, Naj Ziad, on the Facebook page for a discussion group about Julian Jaynes’ book, The Origin of Consciousness in the Breakdown of the Bicameral Mind. He posted something expressing his obsession with logical coherence and consistency. We dialogued for quite a bit, including in a post on his own Facebook page, and he seemed like a nice enough guy. He came across as being genuine in his intentions and worldview, but there was simply something off about him.

It’s likely he has Asperger’s, of a high IQ and high functioning variety. Or it could be that he has some kind of personality disorder. Either way, my sense is that he is severely lacking in cognitive empathy, although I doubt he is deficient in affective empathy. He just doesn’t seem to get that other people can perceive and experience the world differently than he does or that others entirely exist as separate entities apart from his own existence.

When I claimed that my worldview was simply different than his and that neither of our personal realities could be reduced to the other, he called me a dualist. I came to the conclusion that this guy was a solipsist, although he doesn’t identify that way. Solipsism was the only philosophy that made sense of his idiosyncratic ramblings, entirely logical ramblings I might add. He is obviously intelligent, clever, and reasonably well read. His verbal intelligence is particularly high.

In fact, he is so obsessed with his verbal intelligence that he has come to the conclusion that all of reality is language. Of course, he has his own private definition of language, which asserts that language is everything. This leaves his argument a tautology, and he freely admitted as much, but he kept returning to his defense that his argument was logically consistent and coherent. Sure.

It was endlessly amusing. He really could not grasp that the way his mind operates isn’t how everyone’s mind operates, and so he couldn’t escape the hermetically sealed reality tunnel of his own clever monkey mind. His worldview is so perfectly constructed and orderly that there isn’t a single crack to let in fresh air or a beam of light.

He was doing a wondrous impression of Spock in being entirely logical within his narrow psychological and ideological framework. He kept falling back on his being logical and on his use of idiosyncratic jargon. He defined his terms to fit his ideological worldview so entirely that those terms had no meaning outside of it. It all made perfect sense within itself.

His life philosophy is a well-rehearsed script that he goes on repeating. It is an amazing thing to observe as an outsider, especially considering he stubbornly refused to acknowledge that anything could exist outside of his own mind, for he couldn’t imagine the world being different from his own mind. He wouldn’t let go of his beliefs about reality, like the monkey whose hand is trapped in a jar because he won’t let go of the banana.

If this guy were just insane or a troll, I would dismiss him out of hand. But that isn’t the case. Obviously, he is neuroatypical, and I won’t hold that against anyone. And I freely admit that his ideological worldview is logically consistent and coherent, for whatever that is worth.

What made it so fascinating to my mind is that solipsism has always been a speculative philosophy, to be considered only as a thought experiment. It never occurred to me that there would be a highly intelligent and rational person who would seriously uphold it as an entire self-contained worldview and lifestyle. His arguments for it were equally fascinating and he had interesting thoughts and insights, some of which I even agreed with. He is a brilliant guy who, as sometimes happens, has gone a bit off the deep end.

He built an ideology that perfectly expresses and conforms to his highly unusual neurocognitive profile. And of course, when I pointed this out to him, he dismissed it as ‘psychologizing’. His arguments are so perfectly patched together that he never refers to any factual evidence to support his ideological commitments, as it is entirely unnecessary in his own mind. External facts in the external world, appeals to which he calls ‘scientism’, are as meaningless to him as others claiming to have existence independent of his own. From his perspective, there is only what he calls the ‘now’, and there can only be one ‘now’ to rule them all, which just so happens to coincide with his own ego-mind.

If you challenge him on any of this, he is highly articulate in defending why he is being entirely reasonable. Ah, the madness of reason!

* * *

On a personal note, I should make clear that I sympathize with this guy. I have my own psychological issues (depression, anxiety, thought disorder, strong introversion, etc) that can make me feel isolated and cause me to retreat further into myself. Along with a tendency to over-intellectualize everything, my psychological issues have at times led me to get lost in my own head.

I can even understand the attraction of solipsism and, as a thought experiment, I’ve entertained it. But somehow I’ve always known that I’m not ‘normal’, which is to say that others are not like me. I have never actually doubted that others not only exist but exist in a wide variety of differences. It hasn’t occurred to me to deny all otherness by reducing all others to my own psychological experience and ideological worldview. I’ve never quite been that lost in myself, although I could imagine how it might happen. There have been moments in my life where my mind could have gone off the deep end.

Yet my sympathy only goes so far. It is hard to sympathize with someone who refuses to acknowledge your independent existence as a unique human being with your own identity and views. There is an element of frustration in dealing with a solipsist, but in this case my fascination drew me in. Before I ascertained he was a solipsist, it was obvious something about him was highly unusual. I kept poking and prodding him until the shape of his worldview became apparent. At that point, my fascination ended. Any further engagement would have continued to go around in circles, which means watching this guy’s mind go around in circles like a dog chasing its own tail.

Of all the contortions the human mind can put itself into, solipsism has to be one of the greatest feats to accomplish. I have to give this guy credit where it’s due. Not many people could keep up such a mindset for long.

Teen Unemployment

I came across an article about teens and unemployment, Jacob Passy’s Record low unemployment doesn’t mean teens will find summer jobs. There is nothing particularly insightful about the article, but it got me thinking. It occurred to me that who is considered working age has changed immensely over time.

In the past, if you could walk and grasp objects with your hands, you were working age. But then child labor was made illegal. Small family farms also used to informally employ many young people, and yet the once common practice of hiring out one’s children as farm labor has also become illegal. For a period of time, new areas of work emerged for youth workers: babysitting, paper routes, fast food restaurants, etc. Even those jobs are disappearing for youth, as job scarcity forces older workers to take those positions, along with other shifts in laws and the economy.

“Overall, far fewer teens are looking for work these days. The labor-force participation rate, a measure of the share of people with jobs or looking for employment, was 35% for teens last July. Comparatively in 2000, when the U.S. economy last came close to achieving full employment, the labor-force participation rate for this group was nearly 53%. […]

“Fewer teens are able to find the types of jobs that were once popular with teen workers. In the late 1990s, one in four food service workers during the summer was a teenager, now that figure is just one in six. Similarly, in 2000 a fifth of retail workers handling sales and customer service in the summer months were teens. These days, that share has dropped to one-seventh of all retail workers.

“Many of the lower-paying jobs in the retail and hospitality sectors that used to be filled by teenagers are now held by foreign-born adults and older workers, including those past retirement age, according to the report from Drexel.”

This has broader implications. As they are not making money, teens aren’t contributing to family income and so are increasingly dependent on parental income. For some families, this means a decrease in family income. For others, parents would make up the difference by working longer hours. But the latter wouldn’t be possible for most, since about half of the working-age population is either unemployed or underemployed.

Competition for work is going to get worse over time. Older workers will increasingly dominate employment. This is a trend that has been developing for more than a century. Universal public education intentionally pulled children out of the job market. The increase in school homework and the extracurriculars needed to get into college have eliminated much of the free time in which teenagers used to work jobs. And increasing college participation is further delaying entry into the workforce.

All of this is putting ever greater pressure on parents as providers. The age of perceived adulthood is being raised, and this artificially lowers unemployment: those who haven’t yet joined the workforce aren’t labeled and counted as unemployed. It’s similar to excluding from unemployment rates people who have given up on looking for a job. It is deceiving to speak of unemployment among job seekers when ever larger numbers, for one reason or another, are being excluded from the category of job seekers.
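To see how the denominator does the work here, consider a toy calculation (all numbers invented for illustration):

```python
# The headline unemployment rate counts only "job seekers":
#   rate = unemployed / (employed + unemployed), as a percentage.
# Shrinking who counts as a job seeker lowers the headline rate
# without anyone actually finding work. All figures are invented.

def unemployment_rate(unemployed, labor_force):
    return 100 * unemployed / labor_force

employed = 45
seeking = 5      # looking for work: counted as unemployed
discouraged = 5  # gave up looking: dropped from the labor force

official = unemployment_rate(seeking, employed + seeking)
inclusive = unemployment_rate(seeking + discouraged,
                              employed + seeking + discouraged)

print(f"official rate:  {official:.1f}%")   # 10.0%
print(f"inclusive rate: {inclusive:.1f}%")  # 18.2%
```

Nothing about anyone’s actual situation changes between the two figures; only the definition of ‘job seeker’ does.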

Unemployment rates seem like an endless game of number manipulation.

* * *

I showed the article to my father. Here is his response:

“Yep. And in Iowa, kids cannot work until 14, and for the next two years, they cannot work past 7 pm on weekdays nor over 4 hours, if I recall correctly. So given the need for shift flexibility as people get sick or unexpectedly quit, employers don’t hire them as readily as older employees. Or so the family owner of a drive-in place told me. Maybe 16 is the magic age.

“So I guess we don’t encourage them. And there is always a “good” reason for laws that discourage them while the real reasons remain unstated. My brother and I started regular part time work at age 15, and daily paper routes at age 11.”

Like my father, I also started working young. I had a paper route in elementary school that I did before school each morning. I did yard work for neighbors in middle school and high school. It was in 11th grade that I got a job at McDonald’s. It was common for kids and teens to work in the past. I never thought anything about it, as many of my friends had been working since they were kids.

So, if discouraging the young from working is intentional, what is the real reason? Some argue that it’s because education has come to be treated as work, in being ever more prioritized. And it’s a fact that kids do more homework now and are more likely to get more education. A few generations back, most people didn’t even graduate from high school, and so beginning work young was a necessity. But how long can adulthood be delayed, and for what purpose?

Maybe we are slowly transitioning toward something like a basic income. It is true that basic income experiments show that, when given a basic income, students are less likely to work while in school, and that probably would be a good thing. I wonder, though. Americans are obsessed with the moral value of work as a way of proving one’s social worth. Plus, work has become a form of social control to keep the masses preoccupied. Older people are working longer and retiring later. And with the phenomenon of increasing bullshit jobs, the disappearance of jobs through automation is far from an inevitability.

This past century’s shift has to end at some point. It can’t keep going like this while maintaining the kind of economy we’ve had.

Most Americans Pleasantly Surprised System Hasn’t Collapsed Yet

“How well are things going in the country today: very well, fairly well, pretty badly or very badly?”

That is from a CNN poll. I enjoy looking at polling data. But I must admit a question like this perplexes me. What does this question even mean? What is it specifically asking about? What is being referred to by ‘things’? What was the context in which it was asked? Were there questions that preceded it and framed it?

Here is the breakdown of responses:
Very well 13%
Fairly well 45%
Pretty badly 28%
Very badly 12%

This question has been asked by CNN going back to 2005. When the options of “very well” and “fairly well” are combined (13% + 45% = 58%), a majority of Americans generally consider ‘things’ to be going ‘well’. That percentage is higher than it has been in all those years of polling; even before the 2008 Great Recession, it wasn’t quite that high.

Obviously, most people asked this question weren’t thinking of the president, Congress, etc., which get low favorable ratings in public opinion, ratings at historical lows. Other polling doesn’t make clear that it is about the economy either. The U.S. population is about evenly split over whether the economy is better now than a year ago. But even that is hard to interpret, considering that about half the population is unemployed or underemployed and a large part of the population is in or near poverty. This is probably how most people think about the economy: as personal experience, not abstract data.

Inequality continues to rise; housing, healthcare, and education costs are higher than ever; wages continue to stagnate; few Americans have any retirement savings or even enough money to pay for a major emergency; job security is an endangered species; and good benefits are no longer included as part of the American Dream. Plus, personal and national debt keeps on growing; the big banks that were deemed too big to fail, and so were bailed out to avoid financial collapse, are now even bigger; new kinds of monopoly-like corporations are forming within multiple markets; and, related to high inequality, a number of serious thinkers, including President Jimmy Carter, have stated that the United States is now a banana republic.

It probably shouldn’t be interpreted as high praise that the economy is doing slightly less badly or maintaining expected levels of crappiness. In this context, doing well might simply mean that the situation is tolerable enough to not yet incite mass revolt and possibly revolution.

Furthermore, even though many support Trump’s tax cut, most Americans don’t think it will personally benefit them. As for the future, the population is split three ways over whether the economy will get better, remain the same, or get worse (causing one to wonder, since some of those who support the tax cut apparently don’t believe it will improve the economy for either themselves or other people). The conclusion that things are going well is far from being a straightforward appraisal of confident hope or satisfied contentment.

To consider other areas, I can’t imagine that the majority of those polled believe that U.S. foreign policy is doing well. The war on terror drags on, with growing conflict or worsening relations with Russia, Iran, Syria, and other countries. Nuclear threats abound and have received much media attention, and the specter of nuclear war and possibly world war looms in the background. The U.S. military is stuck in permanent occupation of numerous parts of the world. Worse still, the U.S. has never in living memory been this hated and mistrusted on the world stage. Many Americans are feeling embarrassed, ashamed, and defensive about their country’s standing in world opinion.

On a more direct level for Americans, it is clear to everyone that the country is more divided than ever, especially along class and generational lines. For multiple reasons, there are high levels of stress in our society that have been erupting in acts of mass violence and, more generally, have caused a spike in such things as mental illness. Also, there is an opioid epidemic, and mortality rates are worsening for multiple demographics: the middle-aged, rural whites, rural women, etc. Inequality is growing and everyone knows it, and it is slowly sinking in that inequality is and always was about far more than merely the economy. Big biz, especially big banks, gets ratings in public polling about as low as big gov does. The favorable ratings of capitalism are quickly dropping while the favorable ratings of socialism are on the rise.

Americans don’t seem particularly optimistic at the moment. Maybe public opinion has to be interpreted as a relative perception in any given moment. Asking people how well things are going is asking them how well compared to how badly things have been. And maybe the only thing implied in the polling is that many Americans don’t believe, or don’t want to believe, that it is going to get even worse. It could be that it feels like we are finally bottoming out as a nation and that this is as bad as it can get. If nothing else, Donald Trump as president demonstrates that a complete idiot can be the leader without all of it entirely collapsing, at least not immediately.

I guess some people find it reassuring that the teetering, ramshackle system somehow miraculously manages to hold together. Maybe, just maybe, we will make it. Then again, most of those who state things are going well could be older Americans who assume that the consequences and costs will be delayed long enough that they will never have to deal with them. It’s not their problem, even as they helped to cause it. Let the young clean up the mess.

Or it could more simply be standard denial, without much if any clear thought about where it is all heading. As long as the social order more or less remains intact for the moment, the general mood is that we are doing as well as can be expected under dire circumstances. Most Americans likely don’t want to think beyond that. Whatever it takes to avoid paralyzing despair, that seems to be the prevailing mindset. It’s as good an interpretation as any other.

* * *

Giving it one more thought, I realized the explanation could be even simpler than any of the above speculations. It could be so simple as to be boring. Let me share the most down-to-earth possibility.

It might come down to timing. The responses are a snapshot of a particular moment; they might not even represent public opinion from earlier in the year or from a short while later. All responses came between the second and fifth of this month, a three-day period. Whatever happened to be in the news cycle at that moment might have influenced the answers chosen. So, what was going on at the beginning of this month? One of the biggest events widely reported in the news during the prior week and into the polling period was the peace declared between North Korea and South Korea. President Trump, of course, took all the credit for it. And, only a few days before CNN did their polling, a crowd of his fans chanted ‘Nobel’, indicating that they thought he deserved the Nobel Peace Prize.

It was a somewhat random event in terms of when it happened. But, as I argued elsewhere, Trump did deserve some credit. He has been so unpredictably crazy that South Koreans, in their own public polling last year, admitted they feared the United States more than they feared North Korea. Whether or not most other Americans wanted to give Trump credit, it would be taken as a positive result by many. For more than a half century, the corporate media, as the propaganda wing of the U.S. government, has never tired of fear-mongering about North Korea. Any lessening of that fear, even if only momentary, would feel like a relief to many Americans.

That is the one thing that comes to mind from what has been in the news lately, and it coincides perfectly with when the question was asked. The only other big thing going on was the investigation into the Russians and the Trump administration, but that has been going on so long that most Americans now ignore it as so much hype and noise. Disregarding the investigation and instead considering the Korean peace talks would explain a lot. If the question had been asked weeks or months earlier, or if the Korean peace talks had happened later, it’s possible the poll results would have been skewed the other direction.

The only way such a poll question could be meaningful is if it were asked multiple times throughout the year and averaged out across the entire year or across a president’s entire time in office. But I doubt CNN is all that interested in meaningful results. Such polls aren’t intended to be analyzed in depth. They are just interesting tidbits of data for a news company to throw out. Public polling, after all, offers corporate media a semblance of legitimacy while being a thousand times cheaper than investigative journalism. And poll results are easier to put into a short piece with a catchy title, as in this case: “CNN Poll: Trump approval steady amid rising outlook for the country”. Anything that will attract viewers and advertising dollars.