Who were the Phoenicians?

In modern society, we are obsessed with identity, specifically in terms of categorizing and labeling. This leads to a tendency to essentialize identity, but essentialism isn’t supported by the evidence. The only identity we are born with is membership in a particular species, Homo sapiens.

What stands out is that other societies have entirely different experiences of collective identity. Contrary to ethnic and racial ideologies, the sharpest distinctions are often those we perceive in the people most similar to us — the (too often violent) narcissism of small differences.

We not only project our own cultural assumptions onto other societies; we also read anachronisms into the past as a way of rationalizing the present. But if we study closely what we know from history and archaeology, there isn’t any clear evidence for ethnic and racial ideology.

The ancient world is more complex than our simple notions. A good example of this is the people(s) that have been called Phoenicians.

* * *

In Search of the Phoenicians
by Josephine Quinn
pp. 13-17

However, my intention here is not simply to rescue the Phoenicians from their undeserved obscurity. Quite the opposite, in fact: I’m going to start by making the case that they did not in fact exist as a self-conscious collective or “people.” The term “Phoenician” itself is a Greek invention, and there is no good evidence in our surviving ancient sources that these Phoenicians saw themselves, or acted, in collective terms above the level of the city or in many cases simply the family. The first and so far the only person known to have called himself a Phoenician in the ancient world was the Greek novelist Heliodorus of Emesa (modern Homs in Syria) in the third or fourth century CE, a claim made well outside the traditional chronological and geographical boundaries of Phoenician history, and one that I will in any case call into question later in this book.

Instead, then, this book explores the communities and identities that were important to the ancient people we have learned to call Phoenicians, and asks why the idea of being Phoenician has been so enthusiastically adopted by other people and peoples—from ancient Greece and Rome, to the emerging nations of early modern Europe, to contemporary Mediterranean nation-states. It is these afterlives, I will argue, that provide the key to the modern conception of the Phoenicians as a “people.” As Ernest Gellner put it, “Nationalism is not the awakening of nations to self-consciousness: it invents nations where they do not exist.” 7 In the case of the Phoenicians, I will suggest, modern nationalism invented and then sustained an ancient nation.

Identities have attracted a great deal of scholarly attention in recent years, serving as the academic marginalia to a series of crucially important political battles for equality and freedom. 8 We have learned from these investigations that identities are not simple and essential truths into which we are born, but that they are constructed by the social and cultural contexts in which we live, by other people, and by ourselves—which is not to say that they are necessarily freely chosen, or that they are not genuinely and often fiercely felt: to describe something as imagined is not to dismiss it as imaginary. 9 Our identities are also multiple: we identify and are identified by gender, class, age, religion, and many other things, and we can be more than one of any of those things at once, whether those identities are compatible or contradictory. 10 Furthermore, identities are variable across both time and space: we play—and we are assigned—different roles with different people and in different contexts, and they have differing levels of importance to us in different situations. 11

In particular, the common assumption that we all define ourselves as a member of a specific people or “ethnic group,” a collective linked by shared origins, ancestry, and often ancestral territory, rather than simply by contemporary political, social, or cultural ties, remains just that—an assumption. 12 It is also a notion that has been linked to distinctive nineteenth-century European perspectives on nationalism and identity, 13 and one that sits uncomfortably with counterexamples from other times and places. 14

The now-discredited categorization and labeling of African “tribes” by colonial administrators, missionaries, and anthropologists of the nineteenth and twentieth centuries provides many well-known examples, illustrating the way in which the “ethnic assumption” can distort interpretations of other people’s affiliations and self-understanding. 15 The Banande of Zaire, for instance, used to refer to themselves simply as bayira (“cultivators” or “workers”), and it was not until the creation of a border between the British Protectorate of Uganda and the Belgian Congo in 1885 that they came to be clearly delineated from another group of bayira now called Bakonzo. 16 Even more strikingly, the Tonga of Zambia, as they were named by outsiders, did not regard themselves as a unified group differentiated from their neighbors, with the consequence that they tended to disperse and reassimilate among other groups. 17 Where such groups do have self-declared ethnic identities, they were often first imposed from without, by more powerful regional actors. The subsequent local adoption of those labels, and of the very concepts of ethnicity and tribe in some African contexts, illustrates the effects that external identifications can have on internal affiliations and self-understandings. 18 Such external labeling is not of course a phenomenon limited to Africa or to Western colonialism: other examples include the ethnic categorization of the Miao and the Yao in Han China, and similar processes carried out by the state in the Soviet Union. 19

Such processes can be dangerous. When Belgian colonial authorities encountered the central African kingdom of Rwanda, they redeployed labels used locally at the time to identify two closely related groups occupying different positions in the social and political hierarchy to categorize the population instead into two distinct “races” of Hutus (identified as the indigenous farmers) and Tutsis (thought to be a more civilized immigrant population). 20 This was not easy to do, and in 1930 a Belgian census attempting to establish which classification should be recorded on the identity cards of their subjects resorted in some cases to counting cows: possession of ten or more made you a Tutsi. 21 Between April and July 1994, more than half a million Tutsis were killed by Hutus, sometimes using their identity cards to verify the “race” of their victims.

The ethnic assumption also raises methodological problems for historians. The fundamental difficulty with labels like “Phoenician” is that they offer answers to questions about historical explanation before they have even been asked. They assume an underlying commonality between the people they designate that cannot easily be demonstrated; they produce new identities where they did not to our knowledge exist; and they freeze in time particular identities that were in fact in a constant process of construction, from inside and out. As Paul Gilroy has argued, “ethnic absolutism” can homogenize what are in reality significant differences. 22 These labels also encourage historical explanation on a very large and abstract scale, focusing attention on the role of the putative generic identity at the expense of more concrete, conscious, and interesting communities and their stories, obscuring in this case the importance of the family, the city, and the region, not to mention the marking of other social identities such as gender, class, and status. In sum, they provide too easy a way out of actually reading the historical evidence.

As a result, recent scholarship tends to see ethnicity not as a timeless fact about a region or group, but as an ideology that emerges at certain times, in particular social and historical circumstances, and especially at moments of change or crisis: at the origins of a state, for instance, or after conquest, or in the context of migration, and not always even then. 23 In some cases, we can even trace this development over time: James C. Scott cites the example of the Cossacks on Russia’s frontiers, people used as cavalry by the tsars, Ottomans, and Poles, who “were, at the outset, nothing more and nothing less than runaway serfs from all over European Russia, who accumulated at the frontier. They became, depending on their locations, different Cossack ‘hosts’: the Don (for the Don River basin) Cossacks, the Azov (Sea) Cossacks, and so on.” 24

Ancient historians and archaeologists have been at the forefront of these new ethnicity studies, emphasizing the historicity, flexibility, and varying importance of ethnic identity in the ancient Mediterranean. 25 They have described, for instance, the emergence of new ethnic groups such as the Moabites and Israelites in the Near East in the aftermath of the collapse of the Bronze Age empires and the “crystallisation of commonalities” among Greeks in the Archaic period. 26 They have also traced subsequent changes in the ethnic content and formulation of these identifications: in relation to “Hellenicity,” for example, scholars have delineated a shift in the fifth century BCE from an “aggregative” conception of Greek identity founded largely on shared history and traditions to a somewhat more oppositional approach based on distinction from non-Greeks, especially Persians, and then another in the fourth century BCE, when Greek intellectuals themselves debated whether Greekness should be based on a shared past or on shared culture and values in the contemporary world. 27 By the Hellenistic period, at least in Egypt, the term “Hellene” (Greek) was in official documents simply an indication of a privileged tax status, and those so labeled could be Jews, Thracians—or, indeed, Egyptians. 28

Despite all this fascinating work, there is a danger that the considerable recent interest in the production, mechanisms, and even decline of ancient ethnicity has obscured its relative rarity. Striking examples of the construction of ethnic groups in the ancient world do not of course mean that such phenomena became the norm. 29 There are good reasons to suppose in principle that without modern levels of literacy, education, communication, mobility, and exchange, ancient communal identities would have tended to form on much smaller scales than those at stake in most modern discussions of ethnicity, and that without written histories and genealogies people might have placed less emphasis on the concepts of ancestry and blood-ties that at some level underlie most identifications of ethnic groups. 30 And in practice, the evidence suggests that collective identities throughout the ancient Mediterranean were indeed largely articulated at the level of city-states and that notions of common descent or historical association were rarely the relevant criterion for constructing “groupness” in these communities: in Greek cities, for instance, mutual identification tended to be based on political, legal, and, to a limited extent, cultural criteria, 31 while the Romans famously emphasized their mixed origins in their foundation legends and regularly manumitted their foreign slaves, whose descendants then became full Roman citizens. 32

This means that some of the best-known “peoples” of antiquity may not actually have been peoples at all. Recent studies have shown that such familiar groups as the Celts of ancient Britain and Ireland and the Minoans of ancient Crete were essentially invented in the modern period by the archaeologists who first studied or “discovered” them, 33 and even the collective identity of the Greeks can be called into question. As S. Rebecca Martin has recently pointed out, “there is no clear recipe for the archetypal Hellene,” and despite our evidence for elite intellectual discussion of the nature of Greekness, it is questionable how much “being Greek” meant to most Greeks: less, no doubt, than to modern scholars. 34 The Phoenicians, I will suggest in what follows, fall somewhere in the middle—unlike the Minoans or the Atlantic Celts, there is ancient evidence for a conception of them as a group, but unlike the Greeks, this evidence is entirely external—and they provide another good case study of the extent to which an assumption of a collective identity in the ancient Mediterranean can mislead. 35

pp. 227-230

In all the exciting work that has been done on “identity” in the past few decades, there has been too little attention paid to the concept of identity itself. We tend to ask how identities are made, vary, and change, not whether they exist at all. But Rogers Brubaker and Frederick Cooper have pinned down a central difficulty with recent approaches: “it is not clear why what is routinely characterized as multiple, fragmented, and fluid should be conceptualized as ‘identity’ at all.” 1 Even personal identity, a strong sense of one’s self as a distinct individual, can be seen as a relatively recent development, perhaps related to a peculiarly Western individualism. 2 Collective identities, furthermore, are fundamentally arbitrary: the artificial ways we choose to organize the world, ourselves, and each other. However strong the attachments they provoke, they are not universal or natural facts. Roger Rouse has pointed out that in medieval Europe, the idea that people fall into abstract social groupings by virtue of common possession of a certain attribute, and occupy autonomous and theoretically equal positions within them, would have seemed nonsensical: instead, people were assigned their different places in the interdependent relationships of a concrete hierarchy. 3

The truth is that although historians are constantly apprehending the dead and checking their pockets for identity, we do not know how people really thought of themselves in the past, or in how many different ways, or indeed how much. I have argued here that the case of the Phoenicians highlights the extent to which the traditional scholarly perception of a basic sense of collective identity at the level of a “people,” “culture,” or “nation” in the cosmopolitan, entangled world of the ancient Mediterranean has been distorted by the traditional scholarly focus on a small number of rather unusual, and unusually literate, societies.

My starting point was that we have no good evidence for the ancient people that we call Phoenician identifying themselves as a single people or acting as a stable collective. I do not conclude from this absence of evidence that the Phoenicians did not exist, nor that nobody ever called her- or himself a Phoenician under any circumstances: Phoenician-speakers undoubtedly had a larger repertoire of self-classifications than survives in our fragmentary evidence, and it would be surprising if, for instance, they never described themselves as Phoenicians to the Greeks who invented that term; indeed, I have drawn attention to several cases where something very close to that is going on. Instead, my argument is that we should not assume that our “Phoenicians” thought of themselves as a group simply by analogy with models of contemporary identity formation among their neighbors—especially since those neighbors do not themselves portray the Phoenicians as a self-conscious or strongly differentiated collective. Rather, we should accept the gaps in our knowledge and fill the space with the stories that we can tell.

The stories I have looked at in this book include the ways that the people of the northern Levant did in fact identify themselves—in terms of their cities, but even more of their families and occupations—as well as the formation of complex social, cultural, and economic networks based on particular cities, empires, and ideas. These could be relatively small and closed, like the circle of the tophet, or on the other hand, they could, like the network of Melqart, create shared religious and political connections throughout the Mediterranean—with other Levantine settlements, with other settlers, and with local populations. Identification with a variety of social and cultural traditions is one recurrent characteristic of the people and cities we call Phoenician, and this continued into the Hellenistic and Roman periods, when “being Phoenician” was deployed as a political and cultural tool, although it was still not claimed as an ethnic identity.

Another story could go further, to read a lack of collective identity, culture, and political organization among Phoenician-speakers as a positive choice, a form of resistance against larger regional powers. James C. Scott has recently argued in The Art of Not Being Governed (2009) that self-governing people living on the peripheries and borders of expansionary states in that region tend to adopt strategies to avoid incorporation and to minimize taxation, conscription, and forced labor. Scott’s focus is on the highlands of Southeast Asia, an area now sometimes known as Zomia, and its relationship with the great plains states of the region such as China and Burma. He describes a series of tactics used by the hill people to avoid state power, including “their physical dispersion in rugged terrain, their mobility, their cropping practices, their kinship structure, their pliable ethnic identities . . . their flexible social structure, their religious heterodoxy, their egalitarianism and even the nonliterate, oral cultures.” The constant reconstruction of identity is a core theme in his work: “ethnic identities in the hills are politically crafted and designed to position a group vis-à-vis others in competition for power and resources.” 4 Political integration in Zomia, when it has happened at all, has usually consisted of small confederations: such alliances, he points out, are common but short-lived, and are often preserved in local place names such as “Twelve Tai Lords” (Sipsong Chutai) or “Nine Towns” (Ko Myo)—information that throws new light on the federal meetings recorded in fourth-century BCE Tripolis (“Three Cities”). 5

In fact, many aspects of Scott’s analysis feel familiar in the world of the ancient Mediterranean, on the periphery of the great agricultural empires of Mesopotamia and Iran, and despite all its differences from Zomia, another potential candidate for the label of “shatterzone.” The validity of Scott’s model for upland Southeast Asia itself—a matter of considerable debate since the book’s publication—is largely irrelevant for our purposes; 6 what is interesting here is how useful it might be for thinking about the mountainous region of the northern Levant, and the places of refuge in and around the Mediterranean.

In addition to outright rebellion, we could argue that the inhabitants of the Levant employed a variety of strategies to evade the worst excesses of imperial power. 7 One was to organize themselves in small city-states with flimsy political links and weak hierarchies, requiring larger powers to engage in multiple negotiations and arrangements, and providing the communities involved with multiple small and therefore obscure opportunities for the evasion of taxation and other responsibilities—“divide that ye be not ruled,” as Scott puts it. 8 A cosmopolitan approach to culture and language in those cities would complement such an approach, committing to no particular way of doing or being or even looking, keeping loyalties vague and options open. One of the more controversial aspects of Scott’s model could even explain why there is no evidence for Phoenician literature despite earlier Near Eastern traditions of myth and epic. He argues that the populations he studies are in some cases not so much nonliterate as postliterate: “Given the considerable advantages in plasticity of oral over written histories and genealogies, it is at least conceivable to see the loss of literacy and of written texts as a more or less deliberate adaptation to statelessness.” 9

Another available option was to take to the sea, a familiar but forbidding terrain where the experience and knowledge of Levantine sailors could make them and their activities invisible and unaccountable to their overlords further east. The sea also offered an escape route from more local sources of power, and the stories we hear of the informal origins of western settlements such as Carthage and Lepcis, whether or not they are true, suggest an appreciation of this point. A distaste even for self-government could also explain a phenomenon I have drawn attention to throughout the book: our “Phoenicians” not only fail to visibly identify as Phoenician, they often omit to identify at all.

It is striking in this light that the first surviving visible expression of an explicitly “Phoenician” identity was imposed by the Carthaginians on their subjects as they extended state power to a degree unprecedented among Phoenician-speakers, that it was then adopted by Tyre as a symbol of colonial success, and that it was subsequently exploited by Roman rulers in support of their imperial activities. This illustrates another uncomfortable aspect of identity formation: it is often a cultural bullying tactic, and one that tends to benefit those already in power more than those seeking self-empowerment. Modern European examples range from the linguistic and cultural education strategies that turned “peasants into Frenchmen” in the late nineteenth century, 10 to the eugenic Lebensborn program initiated by the Nazis in mid-twentieth-century central Europe to create more Aryan children through procreation between German SS officers and “racially pure” foreign women. 11 Such examples also underline the difficulty of distinguishing between internal and external conceptions of identity when apparently internal identities are encouraged from above, or even from outside, just as the developing modern identity as Phoenician involved the gradual solidification of the identity of the ancient Phoenicians.

It seems to me that attempts to establish a clear distinction between “emic” and “etic” identity are part of a wider tendency to treat identities as ends rather than means, and to focus more on how they are constructed than on why. Identity claims are always, however, a means to another end, and being “Phoenician” is in all the instances I have surveyed here a political rather than a personal statement. It is sometimes used to resist states and empires, from Roman Africa to Hugh O’Donnell’s Ireland, but more often to consolidate them, lending ancient prestige and authority to later regimes, a strategy we can see in Carthage’s Phoenician coinage, the emperor Elagabalus’s installation of a Phoenician sun god at Rome, British appeals to Phoenician maritime power, and Hannibal Qadhafi’s cruise ship.

In the end, it is modern nationalism that has created the Phoenicians, along with much else of our modern idea of the ancient Mediterranean. Phoenicianism has served nationalist purposes since the early modern period: the fully developed notion of Phoenician ethnicity may be a nineteenth-century invention, a product of ideologies that sought to establish ancient peoples or “nations” at the heart of new nation-states, but its roots, like those of nationalism itself, are deeper. As origin myth or cultural comparison, aggregative or oppositional, imperialist and anti-imperialist, Phoenicianism supported the expansion of the early modern nation of Britain, as well as the position of the nation of Ireland as separate and respected within that empire; it helped to consolidate the nation of Lebanon under French imperial mandate, premised on a regional Phoenician identity agreed on between local and French intellectuals, but it also helped to construct the nation of Tunisia in opposition to European colonialism.

Paradoxes of State and Civilization Narratives

Below is a passage from a recent book by James C. Scott, Against the Grain.

The book is about agriculture, sedentism, and early statism. The author questions the standard narrative. In doing so, he looks more closely at what the evidence actually shows us about civilization, specifically in terms of supposed collapses and dark ages (elsewhere in the book, he also discusses how non-state ‘barbarians’ are connected to, influenced by, and defined according to states).

Oddly, Scott never mentions Göbekli Tepe. It is an ancient archaeological site that offers intriguing evidence of civilization preceding and hence not requiring agriculture, sedentism, or statism. As has been said of it, “First came the temple, then the city.” That would seem to fit into the book’s framework.

The other topic not mentioned, less surprisingly, is Julian Jaynes’ theory of bicameralism. Jaynes’ view might complicate Scott’s interpretations. Scott goes into great detail about domestication and slavery, specifically in the archaic civilizations such as first seen with the walled city-states. But Jaynes pointed out that authoritarianism as we know it didn’t seem to exist early on, as the bicameral mind made social conformity possible through non-individualistic identity and collective experience (explained in terms of the hypothesis of archaic authorization).

Scott’s focus is more on external factors. Judging from the book, he doesn’t seem to fully take into account social science research, cultural studies, anthropology, philology, etc. The thesis of the book could have been further developed by exploring other areas, although maybe the narrow focus is useful for emphasizing the central point about agriculture. There is a deeper issue, though, that the author does touch upon. What does it mean to be a domesticated human? After all, that is what civilization is about.

He does offer an interesting take on human domestication. Basically, he doesn’t think most humans ever took on the yoke of civilization willingly. There must be systems of force and control in place to make people submit. I might agree, even as I’m not sure that this is the central issue. It’s less about how people submit in body than how they submit in mind. Whether or not we are sheep, there is no shepherd. Even the rulers of the state are sheep.

The temple comes first. Before civilization proper, before walled city-states, before large-scale settlement, before agriculture, before even pottery, there was a temple. What does the temple represent?

* * *

Against the Grain
by James C. Scott
pp. 22-27

PARADOXES OF STATE AND CIVILIZATION NARRATIVES

A foundational question underlying state formation is how we (Homo sapiens sapiens) came to live amid the unprecedented concentrations of domesticated plants, animals, and people that characterize states. From this wide-angle view, the state form is anything but natural or given. Homo sapiens appeared as a subspecies about 200,000 years ago and is found outside of Africa and the Levant no more than 60,000 years ago. The first evidence of cultivated plants and of sedentary communities appears roughly 12,000 years ago. Until then—that is to say for ninety-five percent of the human experience on earth—we lived in small, mobile, dispersed, relatively egalitarian, hunting-and-gathering bands. Still more remarkable, for those interested in the state form, is the fact that the very first small, stratified, tax-collecting, walled states pop up in the Tigris and Euphrates Valley only around 3,100 BCE, more than four millennia after the first crop domestications and sedentism. This massive lag is a problem for those theorists who would naturalize the state form and assume that once crops and sedentism, the technological and demographic requirements, respectively, for state formation were established, states/empires would immediately arise as the logical and most efficient units of political order. 4

These raw facts trouble the version of human prehistory that most of us (I include myself here) have unreflectively inherited. Historical humankind has been mesmerized by the narrative of progress and civilization as codified by the first great agrarian kingdoms. As new and powerful societies, they were determined to distinguish themselves as sharply as possible from the populations from which they sprang and that still beckoned and threatened at their fringes. In its essentials, it was an “ascent of man” story. Agriculture, it held, replaced the savage, wild, primitive, lawless, and violent world of hunter-gatherers and nomads. Fixed-field crops, on the other hand, were the origin and guarantor of the settled life, of formal religion, of society, and of government by laws. Those who refused to take up agriculture did so out of ignorance or a refusal to adapt. In virtually all early agricultural settings the superiority of farming was underwritten by an elaborate mythology recounting how a powerful god or goddess entrusted the sacred grain to a chosen people.

Once the basic assumption of the superiority and attraction of fixed-field farming over all previous forms of subsistence is questioned, it becomes clear that this assumption itself rests on a deeper and more embedded assumption that is virtually never questioned. And that assumption is that sedentary life itself is superior to and more attractive than mobile forms of subsistence. The place of the domus and of fixed residence in the civilizational narrative is so deep as to be invisible; fish don’t talk about water! It is simply assumed that weary Homo sapiens couldn’t wait to finally settle down permanently, could not wait to end hundreds of millennia of mobility and seasonal movement. Yet there is massive evidence of determined resistance by mobile peoples everywhere to permanent settlement, even under relatively favorable circumstances. Pastoralists and hunting-and-gathering populations have fought against permanent settlement, associating it, often correctly, with disease and state control. Many Native American peoples were confined to reservations only on the heels of military defeat. Others seized historic opportunities presented by European contact to increase their mobility, the Sioux and Comanche becoming horseback hunters, traders, and raiders, and the Navajo becoming sheep-based pastoralists. Most peoples practicing mobile forms of subsistence—herding, foraging, hunting, marine collecting, and even shifting cultivation—while adapting to modern trade with alacrity, have bitterly fought permanent settlement. At the very least, we have no warrant at all for supposing that the sedentary “givens” of modern life can be read back into human history as a universal aspiration. 5

The basic narrative of sedentism and agriculture has long survived the mythology that originally supplied its charter. From Thomas Hobbes to John Locke to Giambattista Vico to Lewis Henry Morgan to Friedrich Engels to Herbert Spencer to Oswald Spengler to social Darwinist accounts of social evolution in general, the sequence of progress from hunting and gathering to nomadism to agriculture (and from band to village to town to city) was settled doctrine. Such views nearly mimicked Julius Caesar’s evolutionary scheme from households to kindreds to tribes to peoples to the state (a people living under laws), wherein Rome was the apex, with the Celts and then the Germans ranged behind. Though they vary in details, such accounts record the march of civilization conveyed by most pedagogical routines and imprinted on the brains of schoolgirls and schoolboys throughout the world. The move from one mode of subsistence to the next is seen as sharp and definitive. No one, once shown the techniques of agriculture, would dream of remaining a nomad or forager. Each step is presumed to represent an epoch-making leap in mankind’s well-being: more leisure, better nutrition, longer life expectancy, and, at long last, a settled life that promoted the household arts and the development of civilization. Dislodging this narrative from the world’s imagination is well nigh impossible; the twelve-step recovery program required to accomplish that beggars the imagination. I nevertheless make a small start here.

It turns out that the greater part of what we might call the standard narrative has had to be abandoned once confronted with accumulating archaeological evidence. Contrary to earlier assumptions, hunters and gatherers—even today in the marginal refugia they inhabit—are nothing like the famished, one-day-away-from-starvation desperados of folklore. Hunters and gatherers have, in fact, never looked so good—in terms of their diet, their health, and their leisure. Agriculturalists, on the contrary, have never looked so bad—in terms of their diet, their health, and their leisure. 6 The current fad of “Paleolithic” diets reflects the seepage of this archaeological knowledge into the popular culture. The shift from hunting and foraging to agriculture—a shift that was slow, halting, reversible, and sometimes incomplete—carried at least as many costs as benefits. Thus while the planting of crops has seemed, in the standard narrative, a crucial step toward a utopian present, it cannot have looked that way to those who first experienced it: a fact some scholars see reflected in the biblical story of Adam and Eve’s expulsion from the Garden of Eden.

The wounds the standard narrative has suffered at the hands of recent research are, I believe, life threatening. For example, it has been assumed that fixed residence—sedentism—was a consequence of crop-field agriculture. Crops allowed populations to concentrate and settle, providing a necessary condition for state formation. Inconveniently for the narrative, sedentism is actually quite common in ecologically rich and varied, preagricultural settings—especially wetlands bordering the seasonal migration routes of fish, birds, and larger game. There, in ancient southern Mesopotamia (Greek for “between the rivers”), one encounters sedentary populations, even towns, of up to five thousand inhabitants with little or no agriculture. The opposite anomaly is also encountered: crop planting associated with mobility and dispersal except for a brief harvest period. This last paradox alerts us again to the fact that the implicit assumption of the standard narrative—namely that people couldn’t wait to abandon mobility altogether and “settle down”—may also be mistaken.

Perhaps most troubling of all, the civilizational act at the center of the entire narrative, domestication, turns out to be stubbornly elusive. Hominids have, after all, been shaping the plant world—largely with fire—since before Homo sapiens. What counts as the Rubicon of domestication? Is it tending wild plants, weeding them, moving them to a new spot, broadcasting a handful of seeds on rich silt, depositing a seed or two in a depression made with a dibble stick, or ploughing? There appears to be no “aha!” or “Edison light bulb” moment. There are, even today, large stands of wild wheat in Anatolia from which, as Jack Harlan famously showed, one could gather enough grain with a flint sickle in three weeks to feed a family for a year. Long before the deliberate planting of seeds in ploughed fields, foragers had developed all the harvest tools, winnowing baskets, grindstones, and mortars and pestles to process wild grains and pulses. 7 For the layman, dropping seeds in a prepared trench or hole seems decisive. Does discarding the stones of an edible fruit into a patch of waste vegetable compost near one’s camp, knowing that many will sprout and thrive, count?

For archaeo-botanists, evidence of domesticated grains depended on finding grains with nonbrittle rachis (favored intentionally and unintentionally by early planters because the seedheads did not shatter but “waited for the harvester”) and larger seeds. It now turns out that these morphological changes seem to have occurred well after grain crops had been cultivated. What had appeared previously to be unambiguous skeletal evidence of fully domesticated sheep and goats has also been called into question. The result of these ambiguities is twofold. First, it makes the identification of a single domestication event both arbitrary and pointless. Second, it reinforces the case for a very, very long period of what some have called “low-level food production” of plants not entirely wild and yet not fully domesticated either. The best analyses of plant domestication abolish the notion of a singular domestication event and instead argue, on the basis of strong genetic and archaeological evidence, for processes of cultivation lasting up to three millennia in many areas and leading to multiple, scattered domestications of most major crops (wheat, barley, rice, chick peas, lentils). 8

While these archaeological findings leave the standard civilizational narrative in shreds, one can perhaps see this early period as part of a long process, still continuing, in which we humans have intervened to gain more control over the reproductive functions of the plants and animals that interest us. We selectively breed, protect, and exploit them. One might arguably extend this argument to the early agrarian states and their patriarchal control over the reproduction of women, captives, and slaves. Guillermo Algaze puts the matter even more boldly: “Early Near Eastern villages domesticated plants and animals. Uruk urban institutions, in turn, domesticated humans.” 9

How accurate was the movie “The Founder” about Ray Kroc and the McDonald brothers?

“Up until the time we sold, there was no mention of Kroc being the founder. If we had heard about it, he would be back selling milkshake machines.”
~ Richard McDonald, The Wall Street Journal, 1991

How accurate was the movie “The Founder” about Ray Kroc and the McDonald brothers? That is a question someone asked at Quora, to which I offered a response.

There is criticism and disagreement, some of it quite strongly held, though mostly about minor details. One example is who first came up with using powdered mix for milkshakes and when that happened. But the overall story of the business appears to be accurate. One of the few major points of contention is whether there ever was a handshake royalty agreement, as the two sides told different versions. So it partly depends on which side one considers more credible and trustworthy. Though some claims can be verified from documents, much of what the film is based on comes from various personal accounts.

Other debates are more philosophical, such as what it means to be a founder. The McDonald brothers founded the business model and franchised it before Kroc partnered with them. They had gone so far as to have already sold franchise rights in Kroc’s hometown, which Kroc had to buy out. What Kroc founded was a real estate empire: he owned the land on which the franchises were located, and that was the source of the wealth behind McDonald’s rise into an international megacorporation.

See below for more info:

Is the McDonald’s Movie ‘The Founder’ True?
by Lena Finkel

So is the film true-to-life? Well, that’s debatable depending on who you ask. Kroc and the McDonald brothers had, shall we say, disagreements about whose idea it was to franchise and use the now-infamous golden arches. Kroc also claims that after he officially took over the franchise, he ran the McDonald brothers out of business at their original location (which they maintained per their agreement) by opening a brand new McDonald’s across the street — a fact which the McDonald brothers vehemently disagree with.

But the bare bones of the story seem to be accurate. As for the details, it looks like the movie is following Kroc’s account of the events, which makes sense since we’re guessing The Founder had to get certain permissions from the fast food restaurant in order to use his name, etc.

How Accurate is ‘The Founder’? The True Facts About McDonald’s Will Surprise You
by Kayleigh Hughes

[P]retty much everything biographical about Kroc is true. […]

Other details in the film aren’t quite accurate, though. For example, though the film indicates that Kroc himself came up with the idea of franchising McDonald’s, in fact the McDonald’s brothers had already begun franchising the restaurant before they met Kroc. Money reports that they had about six locations by 1954. And while the film suggests Kroc also gave the brothers the idea of the iconic “golden arches,” Business Insider notes that the brothers had architect Stanley Clark Meston design them in 1952. […]

One thing that had to be accurate in The Founder, though, was any representation of McDonald’s, including branding, iconography, and restaurant designs. That’s because, as the New York Times explains, the makers of The Founder were allowed to use McDonald’s iconography as long as it was accurate and did not misrepresent the company. So, production designer Michael Corenblith was meticulous about the set design, using “old photographs, blueprints and other archival material” as well as “under the radar” visits to older McDonald’s restaurants to get exact measurements. The final representations, including two full-sized working McDonald’s restaurants, maintained “absolute high fidelity.”

You can rest assured that The Founder is a film whose crazy story is in fact pretty darn accurate.

New Movie ‘The Founder’ Explores Entrepreneurship’s Dark Side Through McDonald’s Origins
by Joan Oleck

The movie starts with the message, “Based on a true story.” How close to the real thing was this?

Very close. The one thing you should know is that every single movie that comes out today that is a true story, historical, is going to say “based on a true story,” for legal reasons. Because, for instance, I’ve got Michael Keaton playing Ray Kroc; I don’t have Ray Kroc playing Ray Kroc. And there are certain scenes where there was no stenographer in the room, so you’re making up dialogue. So, from the legal standpoint, they always say “based on a true story.”

[The Founder, though, was very close to what happened]. The most harrowing lines that come out of Kroc’s mouth are his actual lines: ‘If a competitor was drowning, stick a hose down his throat.’ ‘Business is war.’ ‘Dog eat dog, rat eat rat.’ Those are actual quotes.

The Founder Movie vs True Story of Real Ray Kroc, Dick McDonald
from History vs. Hollywood

Did Ray Kroc renege on his handshake deal to pay the McDonald brothers a percentage of the revenue from the franchises?

Yes. After the brothers refused to give Kroc the original restaurant, he supposedly cheated the brothers out of the 0.5 percent royalty agreement they had been getting, which would have been valued at $15 million a year by 1977 and as high as $305 million a year by 2012 (according to one estimate). In his book, Kroc wrote, “If they [the brothers] had played their cards right, that 0.5 percent would have made them unbelievably wealthy.” Relatives of Richard and Maurice McDonald say that Maurice (Mac) was so distraught that it contributed to his eventual death from heart failure a decade later. -Daily Mail Online

Did Ray Kroc really credit himself with being the founder of McDonald’s?

Yes. After the McDonald brothers sold the company to Ray Kroc in 1961 for $2.7 million, he began to take credit for its birth. “Suddenly, after we sold, my golly, he elevated himself to the founder,” said Richard McDonald in a 1991 interview (Sun Journal). Kroc reinforced his claim of being the founder in his 1977 biography, Grinding It Out: The Making of McDonald’s, in which he largely traces McDonald’s origins to his own first McDonald’s restaurant in Des Plaines, Illinois (it was actually the ninth restaurant overall). However, he does include Dick and Mac and their original restaurant in his book. Kroc didn’t open his Des Plaines restaurant until April 15, 1955, roughly seven years after the McDonald brothers opened the original San Bernardino location in 1948 (The New York Times).

‘The Founder’ and the Complicated True Story Behind the Founding of McDonald’s
by Kerry Close

The sale left Kroc bitterly angry with the McDonald brothers for keeping the original location. He opened a McDonald’s location across the street from the brothers’ original restaurant, forcing them to rename the original burger joint, which didn’t stay in business much longer.

“I ran ’em out of business,” he gleefully told TIME.

That was an angle with which Dick McDonald didn’t exactly agree: “Ray Kroc stated that he forced McDonald Bros, to remove the name McDonald’s from the unit we retained in San Bernardino, Calif. The facts are that we took the name off the building and removed the arches immediately upon the closing of the sale of our company to Kroc and associates in December 1961,” he stated in a letter to the editor that ran several weeks later. “Kroc must have been kidding when he told your reporter that we renamed our unit Mac’s Place. The name we used was The Big M. Ray was also being facetious when he told your reporter that he drove us out of business. My brother and I had retired two years previous to the sale, and were living in Santa Barbara, Calif. We had turned the operation of the San Bernardino unit over to a couple of longtime employees of ours who operated the drive-in for seven years. Ray Kroc was always a great prankster and probably couldn’t resist the temptation to needle me.”

Nevertheless, Kroc proclaimed himself McDonald’s founder. Indeed, the company honored him on its Founder’s Day (and wouldn’t include the McDonald brothers until 1991).

Kroc’s version of the story upset the McDonald brothers after the publication of his 1977 autobiography Grinding it Out: The Making of McDonald’s. In the book, he named the first franchise he opened—in Des Plaines, Ill.—as the first McDonald’s restaurant ever opened.

“Up until the time we sold, there was no mention of Kroc being the founder,” Dick McDonald told the Wall Street Journal in 1991. ”If we had heard about it, he would be back selling milkshake machines.”

Grandson of McDonald’s founder sees film on restaurant chain
by Ray Kelly

He said he was impressed with the film and the portrayals of Kroc (Michael Keaton) and Dick and “Mac” McDonald (Nick Offerman and John Carroll Lynch).

“They took a little creative license, but they stuck very close to the story,” French said. “(Offerman and Lynch) were spot on. They conveyed the charismatic New Englanders that they were.”

The real “Founder”
from CBS News

Jason McDonald French takes pride in what his grandfather created. He reflected on the nostalgic quality of the San Bernardino McDonald’s, and what it means to him: “It’s something that my grandfather over tireless years came up with.”

But there’s something the family rarely talked about: the handshake deal in which Ray Kroc promised the McDonald brothers a half-percent royalty on all future McDonald’s proceeds.

The family says he never paid them a cent.

“I think it’s worth, yeah, $100 million a year,” said French. “Yeah, pretty crazy.”

“Is there bitterness about that in your family?”

“No, No. My grandfather was never bitter over it. Why would we be bitter over something that my grandfather wasn’t bitter over?”

“Well, there’s 100 million reasons you could be!” said Tracy.

“Yeah,” French smiled.

For French, seeing his family’s story told on the big screen is its own form of payback.

“We were overjoyed with the fact that the story’s being told the right way and that it’s being historically accurate,” he said. “They did create fast food. They started that from the beginning, and I don’t think they get enough credit for what they actually created.”

Damnation: Rural Radicalism

Damnation is a new show on USA Network (co-produced by Netflix). It’s enjoyable entertainment inspired by history and influenced by literature.

As Phil De Semlyen at Empire summarizes the background of the show, it is “a 1930s saga of big business concerns and poor, struggling families, with possibly a sprinkling of Elmer Gantry-like religious hypocrisy, crime and demagoguery thrown in for good measure. ‘It’s set in the Great Depression and based on true events,’ Mackenzie tells Empire of this heady-sounding mix. ‘It’s about strikers and strike-breakers in Iowa, almost the Dust Bowl, which is bloody interesting.’ A bit Steinbeck-y, then? ‘Kind of. A little bit more amped than that, but yeah.’” And in a Cleveland.com piece by Mark Dawidziak, the show’s creator Tony Tost explained in an interview that, “They’re unquestionably two of my favorite writers… The world of John Steinbeck as presented in ‘The Grapes of Wrath,’ ‘Of Mice and Men’ and ‘Cannery Row’ was a big influence, as was Dashiell Hammett’s first novel, ‘Red Harvest,’ which is set in a Western mining town. All of that went into the soup when writing ‘Damnation.’” In mentioning that interview, Bustle’s Jack O’Keeffe writes that,

While the show’s creator has named The Grapes Of Wrath as a touchstone for the series, it also calls to mind one of the most acclaimed period films of the past decade. The 2007 film There Will Be Blood covers the first three decades of 20th Century America, stopping just shy of the Great Depression. However, the small-town rivalry between a suspicious preacher and a business-minded capitalist that arises in There Will Be Blood seems to mirror the central conflict present in Damnation. Damnation seems to be drawing from some pieces of American fiction about the sociopolitical realities of this particular era.

In an interview with Cleveland.com, Tost admitted that Damnation’s influences don’t stop at Steinbeck or the violent filmography of Quentin Tarantino. Tost also listed iconic western director Sam Peckinpah, the Pulitzer-prize winning novel Gilead, and the non-fiction book Hard Times: An Oral History Of The Great Depression among his many inspirations. While Damnation may have invented the details of its story, the creative forces behind the show seemed to do their homework when it came to capturing an accurate picture of what life was like then.

While many of the show’s influences are set 80 years ago, the most surprising source for Damnation may be 2017. Tost told Cleveland.com in the previously mentioned interview, “If you look at the 1930s — a time when there was increasing distrust in institutions, there was fear of finding meaningful work, there is this onslaught of new technology taking away jobs — the relevance [of the show to 2017 audiences] is almost inescapable.”

In a Fayetteville Flyer interview, Tost describes it as “1/3 Clint Eastwood, 1/3 John Steinbeck, 1/3 James Ellroy. That is, it takes some characters you’d normally see in a tough western, plops them in the world of Grapes of Wrath, and places them in the sort of pulpy paranoid narrative you see in Ellroy’s novels.” About the research, he says:

It’s a blast. Back in my academic days, my field of study was American literature from 1890 to 1945 and I wrote a dissertation on the influence of new technologies in the 20s and 30s on the American imagination. Then I wrote a book about Johnny Cash which delved into the same time period from a different angle, looking at the music and preachers and myths of Americana. So by the time I came up with Damnation as a TV show, I had a good feel for the period, I think. I’ve done plenty of research since then: oral histories and historical accounts of the period and so forth. We have a person who works on the show who daily does research into various arenas we’re interested in, whether it’s carnivals or bootlegging or pornography or baseball or what have you. Largely, I subscribe to David Milch of Deadwood’s advice: do a ton of research, then forget it, and then use your imagination. So Damnation mingles official history with fiction. I sometimes call it a “speculative history” of the time period.

And about “parallels between that period and today,” he states that there are, “Too many to list. I think that’s one of the things that got us the series order from USA network. Populist anger, fears about technologies and immigrants taking away jobs, fascist tendencies, fears of environmental apocalypse (dust bowl), life and death struggles over who is or isn’t a “real” American. The parallels are often spooky.”

So, even as it follows the general pattern of known history, it doesn’t appear to be based on any specific set of events. It is about the farmer revolts in Iowa during the Great Depression (see the 1931 Iowa Cow War, the 1932 Farmers’ Holiday Association, and the 1933 Wisconsin Milk Strike). This is the kind of topic, demonstrating traditional all-American radicalism, that triggers the political right and makes them nostalgic for the pro-capitalist political correctness of corporate media propaganda during the Cold War. But I don’t think the fascist wannabes should get too worried since, as we know from history, the capitalists, or rather corporatists, defeated that threat from below. The days of a radical working class and of the independent farmer were numbered. The show captures that brief moment when the average American fought the ruling elite with a genuine if desperate hope, a last stand in defense of their way of life, but it didn’t have a happy ending for them.

The USA Network can put out a show like this because capitalism is so entrenched that such history of rebellion no longer feels like a serious threat, although this sense of security might turn out to be false in the long run. Capitalist-loving corporations, of course, will sell anything for a profit, even tv shows about a left-wing populist revolt against capitalists — as the quip often attributed to Marx goes, “The last capitalist we hang shall be the one who sold us the rope.” The heckling complaints from the right-wing peanut gallery are maybe a good sign, as they are sensing that public opinion is turning against them. But as for appreciating the show, it is irrelevant what you think about the historical events themselves. The show doesn’t play into any simplistic narrative of good vs evil, as characters on both sides have complicated pasts. One is free to root for the capitalists as their goons kill the uppity farmers, if that makes one happy.

As for myself, the show is of personal interest because most of the story occurs here in Iowa. The specific location named is Holden County, but I have no idea where that is supposed to be. There presently is no Holden County in Iowa, and I don’t know that there ever was. All I could find is a reference to a Holden County School (Hamilton Township) in an obituary from Decatur County, which is along the southern border of Iowa (a county over from Appanoose, where Centerville, a town with an interesting history, is located). Maybe there used to be a Holden County that was absorbed by another county, a common event I’ve come across in genealogical research, but in this case no historical map shows a Holden County ever having existed.

The probable fictional nature of the county aside, there is a reason the general location is relevant. Iowa is a state that exists in multiple overlapping border regions: between the Mississippi River and the Missouri River, between the Midwest and Far West, between the Upper Midwest and the Upper South. It is technically in the Midwest and typically perceived as the Heart of the Heartland, the precise location of Standard American English. The broad outlines of Iowa were defined according to Indian territory, as was the northern border of Missouri originally. What later became a boundary dispute almost led to violent conflict between Missouri and Iowa, rooted in the ideological conflict over slavery that would eventually develop into the Civil War.

Large parts of Iowa have more similarity to the Upper Midwest. The state is distinct in being west of the Mississippi River, one of the last areas of refuge for many of what then were still independent Native American tribes and hence one of the last major battlegrounds in the fight against Westward expansion. Iowa is the only state where a tribe collectively bought its own land, rather than staying on a federal reservation. As for southern Iowa, there is a clear Southern influence, and you can occasionally hear a Southern accent (as found all across the lower edge of the Lower Midwest). That distinguishes it from northern Iowa, with more of the northern European (German, Czech, and Scandinavian) culture shared with Minnesota and Wisconsin. And the more urbanized and industrialized Eastern Iowa has some New England influence from early settlers.

Maybe related to the show, southern Iowa had much more racial and ethnic diversity because of the immigrants attracted to mining towns. This led to greater conflict. I know that in Centerville, a town once as diverse as any big city, the Ku Klux Klan briefly used violence and manipulation to take control of the government before being ousted by the community. The area was important for the Underground Railroad, but it wasn’t a safe area for blacks to live until after the Civil War. In Damnation, some of the town residents are members of the Black Legion, the violent militant group that was an offshoot of the KKK (originally formed to guard Klan leaders). In the show, the Black Legion is essentially a fascist group that opposes left-wing politics and labor organizing, which is historically accurate. The Klan and related groups in the North were more politically oriented, since the black population was fewer in number. In fact, the Klan tended to be found in counties with the fewest minorities (racial minorities, ethnic minorities, and religious minorities), as shown in how they couldn’t maintain control in diverse towns like Centerville.

One of the few blacks portrayed in the show is a woman working at a brothel. I suppose that would have been common, as blacks would have had a harder time finding work. In a scene at the brothel, there was one detail that seemed potentially inaccurate historically. A Pinkerton goon has all the prostitutes gathered and holds up something with words on it. He wants to find out which of them can read, and it turns out that the black woman is the only literate prostitute working there. That seems unlikely. Iowa had a highly educated population early on, largely by design — as Phil Christman explains (On Being Midwestern: The Burden of Normality):

This is a part of the country where, the novelist Neal Stephenson observes, you can find small colleges “scattered about…at intervals of approximately one tank of gas.” Indeed, the grid-based zoning so often invoked to symbolize dullness actually attests to a love of education, he argues: 

People who often fly between the East and West Coasts of the United States will be familiar with the region, stretching roughly from the Ohio to the Platte, that, except in anomalous non-flat areas, is spanned by a Cartesian grid of roads. They may not be aware that the spacing between roads is exactly one mile. Unless they have a serious interest in nineteenth-century Midwestern cartography, they can’t possibly be expected to know that when those grids were laid out, a schoolhouse was platted at every other road intersection. In this way it was assured that no child in the Midwest would ever live more than √2 miles [i.e., about 1.4 miles] from a place where he or she could be educated.7

Minnesota Danish farmers were into Kierkegaard long before the rest of the country.8 They were descended, perhaps, from the pioneers Meridel LeSueur describes in her social history North Star Country: 

Simultaneously with building the sod shanties, breaking the prairie, schools were started, Athenaeums and debating and singing societies founded, poetry written and recited on winter evenings. The latest theories of the rights of man were discussed along with the making of a better breaking plow. Fourier, Marx, Rousseau, Darwin were discussed in covered wagons.9

If you’ve read Marilynne Robinson’s Gilead trilogy, you know that many of these schools were founded as centers of abolitionist resistance, or even as stops on the Underground Railroad.

The rural Midwest was always far different from the rural South. Iowa, in particular, was a bureaucratically planned society with the greatest proportion of developed land of any state in the country. The location of roads, railroads, towns, and schools was determined before most of the population arrived (similar to what China is now attempting with its mass building of cities out of nothing). The South, on the other hand, grew haphazardly and with little government intervention, as seen in the crazy zig-zagging of property lines and roads under the metes-and-bounds system. This orderly design of Iowa fit the orderly culture of Northern European immigrants and New England settlers, contributing to an idealistic mentality about how society should operate (the Iowa college towns surrounded by farmland were built on the New England model).

The farmer revolts didn’t come out of nowhere. The immigrant populations in states like Iowa were already strongly community-focused and civic-minded. With them, they brought values of work ethic, systematic methods of farming, love of education, and much else. As an interesting example, Iowa was once known as the most musical state in the country because every town had local bands.

Unlike the stereotype, Iowans were obsessed with high culture. They saw themselves as being on the vanguard of Western Civilization. With so many public schools and colleges near every community, Iowans were well educated. The reason schoolchildren to this day have summers off was originally to let farm children help on the farm while still attending school. These Midwestern farm kids had relatively high rates of college attendance. And Iowa has long been known for having good schools, especially in the past. My mother has noted that many Iowans she knows who are college-educated professionals went to small rural one-room schoolhouses.

Another factor is that Northern Europeans had a collectivist bent. They didn’t just love building public schools, public libraries, and public parks. They also formed civic institutions, farmer co-ops, credit unions, etc. They had a strong sense of solidarity that held their communities together. As the Iowa farmers stood together against the capitalist elites from the cities (the banksters, robber barons, and railroad tycoons), so did the German-American residents of Templeton, Iowa stand against Prohibition agents:

The most powerful weapon against oppression is community. This is attested to by the separate fates of a Templetonian like Joe Irlbeck and big city mobster like Al Capone. “Just as Al Capone had Eliot Ness, Templeton’s bootleggers had as their own enemy a respected Prohibition agent from the adjacent county named Benjamin Franklin Wilson. Wilson was ardent in his fight against alcohol, and he chased Irlbeck for over a decade. But Irlbeck was not Capone, and Templeton would not be ruled by violence like Chicago” (Kindle Locations 7-9 [Bryce T. Bauer, Gentlemen Bootleggers]). What ruled Templeton was most definitely not violence. Instead, it was a culture of trust. That is a weapon more powerful than all of Al Capone’s hired guns.

Damnation is a fair portrayal of this world that once existed. And it helps us to understand what destroyed that world, as vulture capitalists targeted small family farmers, controlling markets when possible or, failing that, sending in violent goons to create fear and havoc. That world survived in tatters for a few more decades, but government-subsidized big ag quickly took over. Still, small family farmers didn’t give up without a fight, as they were some of the last defenders of a pre-corporatist free market based on the ideal of meritorious hard work — the Jeffersonian ideal of the yeoman farmer with its vision of agrarian republicanism, in line with Paine’s brand of socially minded and liberty-loving Anti-Federalism.

On a more prosaic level, one reviewer offers a critical observation. Mike Hale writes, from a New York Times piece (Review: ‘Damnation’ and the Sick Soul of 1930s America):

Any fidelity to the story’s supposed place and time is clearly incidental to Mr. Tost. He’s transposed the clichés of 19th-century Wyoming or South Dakota to 1930s Iowa, and doesn’t even get the look right — shot in Alberta, the locations look nothing like the Midwest.

Perhaps he was drawn to the contemporary echoes of the Depression-era material but wanted to give it some mock-Shakespearean, “Deadwood”-style dramatic heft. There’s a lot of literary straining going on — the characters are more familiar than you’d expect with the work of Wallace Stevens and Theodore Dreiser, and the sordid capitalism and anti-Communist fervor depicted in the story invoke Sinclair Lewis and Jack London.

I’m not sure why Mike Hale thinks the show doesn’t look like Iowa. He supposedly grew up in Iowa, but I don’t know which part. Anyone who has been in Western Iowa or even much of Eastern Iowa would recognize similar terrain. I doubt anything has been transposed.

Iowa is a young state and, once part of the Wild West, early on had a cowboy culture. Famous Hollywood cowboys came from the Midwest, specifically this region along the Upper Mississippi River — such as Ronald Reagan, who was from western Illinois and worked in Iowa, and John Anderson, who was born in western Illinois and was college-educated in Iowa, but also others who were born and raised in Iowa: John Wayne, Hank Worden, Neville Brand, etc. (not just playing cowboys on the big screen but growing up around that cowboy culture). This isn’t just farm country with fields of corn and soy. Most of that is feed for animals, such as cattle. Iowa is part of the rodeo circuit and there is a strong horse culture around here. A short distance from where I live, a coworker of mine helps drive cattle down a highway every year to move them from one field to another.

But as I pointed out, none of this contradicts it also being a highly educated and literate population. I don’t know why Hale would think that certain writers would be unknown to Midwesterners, especially popular and populist writers like Jack London. As for Theodore Dreiser, he was a fellow German-American Midwesterner who wrote about rural life and was politically aligned with working class interests, including involvement in the defense of radicals like those Iowa farmers — the kind of writer one would expect Iowans, specifically working class activists, to be reading during the Great Depression era. That would be even more true for Sinclair Lewis who was from neighboring Minnesota, not to mention also writing popular books about Midwestern communities and radical criticisms of growing fascism — the same emergent fascism that threatened those Iowa farmers.

It’s interesting that an Iowan like Mike Hale would be so unaware of Iowa history. But maybe that is because he was born and spent much of his life outside of Iowa, specifically outside of the United States. His family isn’t from Iowa and so he has no roots here. I noticed that he tweeted that he “Was intrigued ‘Damnation’ is set in my state, Iowa. Didn’t expect the crucifixion, gun battles and frontier brothel”; to which someone responded that “If in Palo Alto, San Jose & NYC since ’77, IA hasn’t been ur state 4 awhile.” Besides, part of his childhood wasn’t even spent in Iowa but instead in Asia. And beyond that, many people simply don’t think he is that great of a critic (see Cultural Learnings, Variety, and Mediaite).

A better review is by Jeff Iblings over at The Tracking Board (Damnation Review: “Sam Riley’s Body”). The review is specifically about the first episode, but goes into greater detail:

Damnation is a new show on USA Networks set in the 1930’s during prohibition, the dust bowl era, and the social unrest during the unionization and strikes that accompanied the corruption of that time. It’s an intriguing look at a moment in American history when people began to wrest control away from a government bought and paid for by industrialists, only to have their movement squashed by the collusion of moneyed interests and the politicians they’d paid for. The series begins in Holden, Iowa as farmers have formed a blockade around the town so no more shipments of produce can reach the city. The powerful banker in town, who owns the newspaper and the Sheriff, has bribed the market in town to keep his food prices low, to price the farmers out of making a profit on their crops so they’ll default on the loans he’s given them. A preacher in town fans the flames of the farmers’ unhappiness and gets them to revolt against the banker. Who is this mysterious preacher, and what does he have planned? […]

Damnation is clearly well researched, and the true-life stories it uses to flesh out its world are there to service the narrative, not overburden the show. 1930’s America was a desperate, bleak time, where moneyed interests controlled everything. The game was fixed back then, with politicians in the pocket of industrialists and wealthy bankers. The people had nothing more to give, since the wealthy had taken nearly everything from them. It’s a very relevant tale. Almost the same exact thing is going on again in present day America, which I would imagine, is one of the points of Damnation.

Iblings writes in another Damnation review of the second episode:

Tony Tost and his writers room delve into the history of the Great Depression in order to mine forgotten aspects of our political and social movements. It’s incredible how prescient much of the struggles of the farmers depicted still are problems today. Price fixing, bank negligence and dishonesty, politicians in the pockets of big business, the stifling of the labor movement when it’s needed most, and the inherent racism and protectionism of white Americans towards other races are all as topical today as they were in the 1930’s. It’s as if little has actually changed 100 years later. Damnation may be a historical television series, but it’s speaking to the America of today.

And about the third episode, he writes:

There are a few interesting moments I want to point out that really stuck with me. The first is the opening scene of a couple watching their kids playing baseball and taking great joy in it. When the wife goes into the shed to get the kids some cream soda, there are nooses hanging from the ceiling and Black Legion outfits hung up on the walls. The man then exclaims to his wife, “If this isn’t the American dream, I don’t know what is.” Damnation uses this banal setting, and these uneventful people to show how the American dream was an exclusionary ideal. They look like normal people you’d run into, but underneath this veneer are racist secrets. This prejudice was pervasive back then, but in Trump’s America this type of hatred and racism has become the norm once again. It was disgusting then, and it’s disgusting now.

What I like about the show is how it portrays the nature of populist politics during that historical era. The show begins in 1931, a moment of transition for American society in the waning days of Prohibition. The Great Depression followed decades of Populism and set the stage for the Progressivism to come. The next year Franklin Delano Roosevelt would be elected and then re-elected three more times, the only president in US history to win four terms.

What many forget about both Populism and Progressivism is the role that religion played, especially Evangelicalism. In the past, Evangelicals were often radical reformers, promoting separation of church and state, abolitionism, women’s rights, and the like. Think of the 1896 “Cross of Gold” speech given by William Jennings Bryan. This goes back to how Thomas Paine, the original American populist and progressive, used Christian language to advocate radical politics. Interestingly, just as Paine was an anti-Christian deist, the leader of the farmers’ revolt is a man falsely posing as an itinerant preacher, although he shows signs of genuine religious feeling, such as sparing a man’s life when he sees the likeness of a cross marked on the floor near the man’s head. However one takes his persona of religiosity, the preaching of a revolutionary Jesus is perfectly in line with the political rhetoric of the period.

I also can’t help but appreciate how much it resonates with the present. The past, in a sense, always remains relevant — since as William Faulkner so deftly put it, “The past is never dead. It’s not even past.” In a New York Post interview, the show’s creator Tony Tost was asked, “How relevant is the plot about the common man battling the establishment today?” And he replied that, “I wrote the first two episodes, like, three years ago, but contemporary history keeps making the show feel more and more relevant. I’m not necessarily trying to do an allegory about the present, but history is very cyclical. There’s some core elemental conflicts and issues that we keep returning to. In a way, the present day almost caught up.”

As with Hulu’s The Handmaid’s Tale and Amazon’s The Man in the High Castle, Damnation has good timing. Such hard-hitting social commentary is important at times like these. And in the form of entertainment, it is more likely to have an impact.

* * *

State of Emergency: The Depression and the Plots to Create an American Dictatorship
by Nate Braden, Kindle Locations 510-571
(see Great Depression, Iowa, & Revolts)

“In September 1932 Fortune published a shocking profile of the effect Depression poverty was having on the American people. Titled “No One Has Starved” – in mocking reference to Herbert Hoover’s comment to that effect – Fortune essentially called the President a liar and explained why in a ten page article. Predicting eleven million unemployed by winter, its grim math figured these eleven million breadwinners were responsible for supporting another sixteen and a half million people, thus putting the total number of Americans without any income whatsoever at 27.5 million. Along with another 6.5 million who were underemployed, this meant 34 million citizens – nearly a third of the country’s population – lived below the poverty line. [1]

“Confidence was low that a Hoover reelection would bring any improvement in the country’s situation. He had ignored calls in 1929 to bail out banks after the stock market crashed on the grounds that the federal government had no business saving failed enterprises. With no liquidity in the financial markets, credit evaporated and deflation pushed prices and wages lower, laying waste to asset values. Two years passed before Hoover responded with the Reconstruction Finance Corporation, created to distribute $300 million in relief funds to state and local governments. It was too little, too late. The money would have been better served shoring up the banks three years earlier.

“With each cold, hungry winter that passed, political discussions grew more radical and less tolerant. Talk of revolution was more openly voiced. Harper’s, reflecting the opinion of East Coast intellectuals, pondered its likelihood and confidently asserted: “Revolutions are made, not by the weak, the unsuccessful, or the ignorant, but by the strong and the informed. They are processes, not merely of decay and destruction, but of advance and building. An old order does not disappear until a new order is ready to take its place.”[2]

“As this smug analysis was rolling off the presses, the weak, the unsuccessful, and the ignorant were already proving it wrong. Most people expected a revolt to start in the cities, but it was in the countryside, in Herbert Hoover’s home state no less, where men first took up arms against a system they had been raised to believe in but no longer did. On August 13, 1932, Milo Reno, the onetime head of the Iowa Farmer’s Union, led a group of five hundred men in an assault on Sioux City. They called it a “farm holiday,” but it was in fact an insurrection. Reno and his supporters blocked all ten highways into the city and confiscated every shipment of milk except those destined for hospitals, dumping it onto the side of the road or taking it into town to give away free. Fed up with getting only two cents for a quart of milk that cost them four cents to bring to market, the farmers were creating their own scarcities in an attempt to drive up prices.

“The insurgents enjoyed local support. Telephone operators gave advance warning of approaching lawmen, who were promptly ambushed and disarmed. When 55 men were arrested for picketing the highway to Omaha, a crowd of a thousand angry farmers descended on the county jail in Council Bluffs and forced their release. The uprising just happened to coincide with the Iowa National Guard’s annual drill in Des Moines, but Governor Dan Turner declined to use these troops to break up the disturbance, saying he had “faith in the good judgment of the farmers of Iowa that they will not resort to violence.”[3]

“The rebellion spread to Des Moines, Spencer, and Boone. Farmers in Nebraska, South Dakota, and Minnesota declared their own holidays. Milo Reno issued a press release vowing to continue “until the buying power of the farmer is restored – which can be done only by conceding him the right to cost of production, based on an American standard of existence.” Business institutions, he added, “whether great or small, important or humble, must suffer.” While advising his followers to obey the law and engage only in “peaceful picketing,” Reno issued this warning: “The day for pussyfooting and deception in the solution of the farmers’ problems is past, and the politicians who have juggled with the agricultural question and used it as a pawn with which to promote their own selfish interests can succeed no longer.”[4]

“Reno and his men had laid down their marker. Aware that the insurrectionists might call his bluff, the governor stopped short of issuing an ultimatum, but he kept his Guardsmen in Des Moines just in case. The showdown never came – a mysterious shotgun attack on one of Reno’s camps near Cherokee was enough to persuade him to call off the holiday – but others weren’t cowed by the violence. The same day Reno issued his press release, coal miners in neighboring Illinois went on strike after their pay was cut to five dollars a day. Fifteen thousand of them shut down shafts all over Franklin County, the state’s largest mining region, and took over the town of Coulterville for several hours, “exhausting provisions at the restaurant, swamping the telephone exchange with calls and choking roads and fields for a mile around” the New York Times reported. Governor Louis Emmerson ordered state troopers to take the town back. Wading into a hostile, sneering crowd who shouted “Cossacks!” at them, the police broke it up with pistols and clubs, putting eight miners in the hospital.

“The rebels were bloodied but unbowed. Vowing to march back in to coal country, strike leader Pat Ansbury told a journalist, “if we go back it must be with weapons. We can’t face the machine guns of those Franklin County jailbirds with our naked hands. Not a man in our midst had even a jackknife. When we go back we must have arms, organization and cooperation from the other side.” Shaking his head at the lost opportunity, he made sure the reporter hadn’t misunderstood him. “This policy of peaceful picketing is out from now on.” Reno conducted a similar post-mortem, acknowledging that his side may have lost the battle but would not lose the war: “You can no more stop this movement than you could stop the revolution. I mean the revolution of 1776.”[5]

“Not only were farmers burdened by low commodity prices, they were also swamped with high-interest mortgages and crushing taxes. In February 1933 Prudential Insurance, the nation’s largest land creditor, announced it would suspend foreclosures on the 37,000 farm titles it held, valued at $209 million. Mutual Benefit and Metropolitan Life followed suit, all of them finally coming to the conclusion that they couldn’t get blood from a rock.

“It was also getting very dangerous to be a repo man in the Midwest. When farms were foreclosed and the land put up for auction, neighbors of the dispossessed property holder would often show up at the sale, drive away any serious bidders, then buy the land for a few dollars and deed it back to the original owner. By this subterfuge a debt of $400 at one Ohio auction was settled for two dollars and fifteen cents. A mortgage broker in Illinois received only $4.90 for the $2,500 property he had put into receivership. An Oklahoma attorney who tried to serve foreclosure papers to a farm widow was promptly waylaid by her neighbors, including the county sheriff, driven ten miles out of town and dumped unceremoniously on the side of the road. A Kansas City realtor who had foreclosed on a 500-acre farm turned up with a bullet in his head, his killers never brought to justice. [6]”

State and Non-State Violence Compared

There is a certain kind of academic that simultaneously interests me and infuriates me. Jared Diamond, in The World Until Yesterday, is an example of this. He is a knowledgeable guy and is able to communicate that knowledge in a coherent way. He makes many worthy observations and can be insightful. But there is also a naivete that at times shows up in his writing. I get the sense that occasionally his conclusions preceded the evidence he shares. Also, he’ll point out the problems with the evidence and then, ignoring what he admitted, will treat that evidence as strongly supporting his biased preconceptions.

Despite my enjoyment of Diamond’s book, I was disappointed specifically in his discussion of violence and war (much of the rest of the book, though, is worthy and I recommend it). Among the intellectual elite, it seems fashionable right now to describe modern civilization as peaceful — that is fashionable among the main beneficiaries of modern civilization, not so much fashionable according to those who bear the brunt of the costs.

In Chapter 4, he asks, “Did traditional warfare increase, decrease, or remain unchanged upon European contact?” That is a good question. And as he makes clear, “This is not a straightforward question to decide, because if one believes that contact does affect the intensity of traditional warfare, then one will automatically distrust any account of it by an outside observer as having been influenced by the observer and not representing the pristine condition.” But he never answers the question. He simply assumes that the evidence proves what he appears to have already believed.

I’m not saying he doesn’t take significant effort to make a case. He goes on to say, “However, the mass of archaeological evidence and oral accounts of war before European contact discussed above makes it far-fetched to maintain that people were traditionally peaceful until those evil Europeans arrived and messed things up.” The archaeological and oral evidence, like the anthropological evidence, is diverse. For example, in northern Europe, there is no evidence of large-scale warfare before the end of the Bronze Age when multiple collapsing civilizations created waves of refugees and marauders.

All the evidence shows us is that some non-state societies have been violent and others non-violent, no different than in comparing state societies. But we must admit, as Diamond does briefly, that contact and the rippling influences of contact across wide regions can lead to greater violence along with other alterations in the patterns of traditional culture and lifestyle. Before contact ever happens, most non-state societies have already been influenced by trade, disease, environmental destruction, invasive species, refugees, etc. Those pre-contact indirect influences can last for generations or centuries prior to final contact, especially with non-state societies that were more secluded. And those secluded populations are the most likely to be studied as supposedly representative of uncontacted conditions.

We should be honest in admitting our vast ignorance. The problem is that, if Diamond fully admitted this, he would have little to write about on such topics or it would be a boring book with all of the endless qualifications (I personally like scholarly books filled with qualifications, but most people don’t). He is in the business of popular science and so speculation is the name of the game he is playing. Some of his speculations might not hold up to much scrutiny, not that the average reader will offer much scrutiny.

He continues to claim that, “the evidence of traditional warfare, whether based on direct observation or oral histories or archaeological evidence, is so overwhelming.” And so asks, “why is there still any debate about its importance?” What a silly question. We simply don’t know. He could be right, just as easily as he could be wrong. Speculations are a dime a dozen. The same evidence can be and regularly is made to conform to and confirm endless hypotheses that are mostly non-falsifiable. We don’t know and probably will never know. It’s like trying to use chimpanzees as a comparison for human nature, even though chimpanzees have for a long time been in a conflict zone with human encroachment, poaching, civil war, habitat loss, and ecosystem destabilization. No one knows what chimpanzees were like pre-contact. But we do know that bonobos that live across a major river in a less violent area express less violent behavior. Maybe there is a connection, not that Diamond is likely to mention these kinds of details.

I do give him credit, though. He knows he is on shaky ground. In pointing out the problems he previously discussed, he writes that, “One reason is the real difficulties, which we have discussed, in evaluating traditional warfare under pre-contact or early-contact conditions. Warriors quickly discern that visiting anthropologists disapprove of war, and the warriors tend not to take anthropologists along on raids or allow them to photograph battles undisturbed: the filming opportunities available to the Harvard Peabody Expedition among the Dani were unique. Another reason is that the short-term effects of European contact on tribal war can work in either direction and have to be evaluated case by case with an open mind.” In between the lines, Jared Diamond makes clear that he can’t really know much of anything about earlier non-state warfare.

Even as he mentions some archaeological sites showing evidence of mass violence, he doesn’t clarify that these sites are a small percentage of archaeological sites, most of which don’t show mass violence. It’s not as if anyone is arguing mass violence never happened prior to civilization. The Noble Savage myth is not widely supported these days and so there is no point in his propping it up as a straw man to knock down.

From my perspective, it goes back to what comparisons one wishes to make. Non-state societies may or may not be more violent per capita. But that doesn’t change the reality that state societies cause more harm, as a total number. Consider one specific example of state warfare. The United States has been continuously at war since it was founded, which is to say not a year has gone by without war (against both state and non-state societies), and most of that has been wars of aggression. The US military, CIA covert operations, economic sanctions, etc. surely have killed at least hundreds of millions of people in my lifetime — probably more people killed than all non-states combined throughout human existence.

Here is the real difference in violence between non-states and states. State violence is more hierarchically controlled and targeted in its destruction. Non-state societies, on the other hand, tend to spread the violence across entire populations. When a tribe goes to war, often the whole tribe is involved. So state societies are different in that usually only the poor and minorities, the oppressed and disadvantaged experience the harm. If you look at the specifically harmed populations in state societies, the mortality rate is probably higher than seen in non-state societies. The essential point is that this violence is concentrated and hidden.

Immensely larger numbers of people are the victims of modern state violence, overt violence and slow violence. But the academics who write about it never have to personally experience or directly observe these conditions of horror, suffering, and despair. Modern civilization is less violent for the liberal class, of which academics are members. That doesn’t say much about the rest of the global population. The permanent underclass lives in constant violence within their communities and from state governments, which leads to a different view on the matter.

To emphasize this bias, one could further note what Jared Diamond ignores or partly reports. In the section where he discusses violence, he briefly mentions the Piraha. He could have pointed out that they are a non-violent non-state society. They have no known history of warfare, capital punishment, abuse, homicide, or suicide — at least none has been observed or discovered through interviews. Does he write about this evidence that contradicts his views? Of course not. Instead, lacking any evidence of violence, he speculates about violence. Here is the passage from Chapter 2 (pp. 93-94):

“Among still another small group, Brazil’s Piraha Indians (Plate 11), social pressure to behave by the society’s norms and to settle disputes is applied by graded ostracism. That begins with excluding someone from food-sharing for a day, then for several days, then making the person live some distance away in the forest, deprived of normal trade and social exchanges. The most severe Piraha sanction is complete ostracism. For instance, a Piraha teen-ager named Tukaaga killed an Apurina Indian named Joaquim living nearby, and thereby exposed the Piraha to the risk of a retaliatory attack. Tukaaga was then forced to live apart from all other Piraha villages, and within a month he died under mysterious circumstances, supposedly of catching a cold, but possibly instead murdered by other Piraha who felt endangered by Tukaaga’s deed.”

Why did he add that unfounded speculation at the end? The only evidence he has is that their methods of social conformity are non-violent. Someone is simply ostracized. But that doesn’t fit his beliefs. So he assumes there must be some hidden violence that has never been discovered after generations of observers having lived among them. Even the earliest account of contact from centuries ago, as far as I know, indicates absolutely no evidence of violence. It makes one wonder how many more examples he ignores, dismisses, or twists to fit his preconceptions.

This reminds me of Julian Jaynes’ theory of bicameral societies. He noted that these Bronze Age societies were non-authoritarian, despite having high levels of social conformity. There is no evidence of these societies having written laws, courts, police forces, formal systems of punishment, and standing armies. Like non-state tribal societies, when they went to war, the whole population sometimes was mobilized. Bicameral societies were smaller, mostly city-states, and so still had elements of tribalism. But the point is that the enculturation process itself was powerful enough to enforce order without violence. That was only within a society, as war still happened between societies, although it was limited and usually only involved neighboring societies. I don’t think there is evidence of continual warfare. Yet when conflict erupted, it could lead to total war.

It’s hard to compare either tribes or ancient city-states to modern nation-states. Their social orders and how they maintained them are far different. And the violence involved is of a vastly disparate scale. Besides, I wouldn’t take the past half century of relative peace in the Western world as being representative of modern civilization. In this new century, we might see billions of deaths from all combined forms of violence. And the centuries earlier were some of the bloodiest and destructive ever recorded. Imperialism and colonialism, along with the legacy systems of neo-imperialism and neo-colonialism, have caused and contributed to the genocide or cultural destruction of probably hundreds of thousands of societies worldwide, in most cases with all evidence of their existence having disappeared. This wholesale massacre has led to a dearth of societies left remaining with which to make comparisons. The survivors living in isolated niches may not be representative of the societal diversity that once existed.

Anyway, the variance of violence and war casualty rates likely is greater in comparing societies of the same kind than in comparing societies of different kinds. As the nearby bonobos are more peaceful than chimpanzees, the Piraha are more peaceful than the Yanomami who live in the same region — as Canada is more peaceful than the US. That might be important to explain and a lot more interesting. But this more incisive analysis wouldn’t fit Western propaganda, specifically the neo-imperial narrative of Pax Americana. From Pax Hispanica to Pax Britannica to Pax Americana, quite possibly billions of combatants have died in wars and billions more innocents as casualties. That is neither a small percentage nor a small total number, if anyone is genuinely concerned about body counts.

* * *

Rebutting Jared Diamond’s Savage Portrait
by Paul Sillitoe & Mako John Kuwimb, iMediaEthics

Why Does Jared Diamond Make Anthropologists So Mad?
by Barbara J. King, NPR

In a beautifully written piece for The Guardian, Wade Davis says that Diamond’s “shallowness” is what “drives anthropologists to distraction.” For Davis, geographer Diamond doesn’t grasp that “cultures reside in the realm of ideas, and are not simply or exclusively the consequences of climatic and environmental imperatives.”

Rex Golub at Savage Minds slams the book for “a profound lack of thought about what it would mean to study human diversity and how to make sense of cultural phenomena.” In a fit of vexed humor, the Wenner-Gren Foundation for anthropological research tweeted Golub’s post along with this comment: “@savageminds once again does the yeoman’s work of exploring Jared Diamond’s new book so the rest of us don’t have to.”

This biting response isn’t new; see Jason Antrosio’s post from last year in which he calls Diamond’s Pulitzer Prize-winning Guns, Germs, and Steel a “one-note riff,” even “academic porn” that should not be taught in introductory anthropology courses.

Now, in no way do I want to be the anthropologist who defends Diamond because she just doesn’t “get” what worries all the cool-kid anthropologists about his work. I’ve learned from their concerns; I’m not dismissing them.

In point of fact, I was startled at this passage on the jacket of The World Until Yesterday: “While the gulf that divides us from our primitive ancestors may seem unbridgably wide, we can glimpse most of our former lifestyle in those largely traditional societies that still exist or were recently in existence.” This statement turns small-scale societies into living fossils, the human equivalent of ancient insects hardened in amber. That’s nonsense, of course.

Lest we think to blame a publicist (rather than the author) for that lapse, consider the text itself. Near the start, Diamond offers a chronology: until about 11,000 years ago, all people lived off the land, without farming or domesticated animals. Only around 5,400 years ago did the first state emerge, with its dense population, labor specialization and power hierarchy. Then Diamond fatally overlays that past onto the present: “Traditional societies retain features of how all of our ancestors lived for tens of thousands of years, until virtually yesterday.” Ugh.

Another problem, one I haven’t seen mentioned elsewhere, bothers me just as much. When Diamond urges his WEIRD readers to learn from the lifeways of people in small-scale societies, he concludes: “We ourselves are the only ones who created our new lifestyles, so it’s completely in our power to change them.” Can he really be so unaware of the privilege that allows him to assert — or think — such a thing? Too many people living lives of poverty within industrialized nations do not have it “completely in their power” to change their lives, to say the least.

Patterns of Culture by Ruth Benedict (1934) wins Jared Diamond (2012)
by Jason Antrosio, Living Anthropologically

Compare to Jared Diamond. Diamond has of course acquired some fame for arguing against biological determinism, and his Race Without Color was once a staple for challenging simplistic tales of biological race. But by the 1990s, Diamond simply echoes perceived liberal wisdom. Benedict and Weltfish’s Races of Mankind was banned by the Army as Communist propaganda, and Weltfish faced persecution from McCarthyism (Micaela di Leonardo, Exotics at Home 1998:196,224; see also this Jon Marks comment on Gene Weltfish). Boas and Benedict swam against the current of the time, when backlash could be brutal. In contrast, Diamond’s claims on race and IQ have mostly been anecdotal. They have never been taken seriously by those who call themselves “race realists” (see Jared Diamond won’t beat Mitt Romney). Diamond has never responded scientifically to the re-assertion of race from sources like “A Family Tree in Every Gene,” and he helped propagate a medical myth about racial differences in hypertension.

And, of course, although Guns, Germs, and Steel has been falsely branded as environmental or geographical determinism, there is no doubt that Diamond leans heavily on agriculture and geography as explanatory causes for differential success. […]

Compare again Jared Diamond. Diamond has accused anthropologists of falsely romanticizing others, but by subtitling his book What Can We Learn from Traditional Societies, Diamond engages in more than just politically-correct euphemism. When most people think of a “traditional society,” they are thinking of agrarian peasant societies or artisan handicrafts. Diamond, however, is referring mainly to what we might term tribal societies, or hunters and gatherers with some horticulture. Curiously, for Diamond the dividing line between the yesterday of traditional and the today of the presumably modern was somewhere around 5,000-6,000 years ago (see The Colbert Report). As John McCreery points out:

Why, I must ask, is the category “traditional societies” limited to groups like Inuit, Amazonian Indians, San people and Melanesians, when the brute fact of the matter is that the vast majority of people who have lived in “traditional” societies have been peasants living in traditional agricultural civilizations over the past several thousand years since the first cities appeared in places like the valleys of the Nile, the Tigris-Euphrates, the Ganges, the Yellow River, etc.? Talk about a big blind spot.

Benedict draws on the work of others, like Reo Fortune in Dobu and Franz Boas with the Kwakiutl. Her own ethnographic experience was limited. But unlike Diamond, Benedict was working through the best ethnographic work available. Diamond, in contrast, splays us with a story from Allan Holmberg, which then gets into the New York Times, courtesy of David Brooks. Compare bestselling author Charles Mann on “Holmberg’s Mistake” (the first chapter of his 1491: New Revelations of the Americas Before Columbus):

The wandering people Holmberg traveled with in the forest had been hiding from their abusers. At some risk to himself, Holmberg tried to help them, but he never fully grasped that the people he saw as remnants from the Paleolithic Age were actually the persecuted survivors of a recently shattered culture. It was as if he had come across refugees from a Nazi concentration camp, and concluded that they belonged to a culture that had always been barefoot and starving. (Mann 2005:10)

As for Diamond’s approach to comparing different groups: “Despite claims that Diamond’s book demonstrates incredible erudition what we see in this prologue is a profound lack of thought about what it would mean to study human diversity and how to make sense of cultural phenomenon” (Alex Golub, How can we explain human variation?).

Finally there is the must-read review Savaging Primitives: Why Jared Diamond’s ‘The World Until Yesterday’ Is Completely Wrong by Stephen Corry, Director of Survival International:

Diamond adds his voice to a very influential sector of American academia which is, naively or not, striving to bring back out-of-date caricatures of tribal peoples. These erudite and polymath academics claim scientific proof for their damaging theories and political views (as did respected eugenicists once). In my own, humbler, opinion, and experience, this is both completely wrong–both factually and morally–and extremely dangerous. The principal cause of the destruction of tribal peoples is the imposition of nation states. This does not save them; it kills them.

[…] Indeed, Jared Diamond has been praised for his writing, for making science popular and palatable. Others have been less convinced. As David Brooks writes in his review:

Diamond’s knowledge and insights are still awesome, but alas, that vividness rarely comes across on the page. . . . Diamond’s writing is curiously impersonal. We rarely get to hear the people in traditional societies speak for themselves. We don’t get to meet any in depth. We don’t get to know what their stories are, what the contents of their religions are, how they conceive of individual selfhood or what they think of us. In this book, geographic and environmental features play a much more important role in shaping life than anything an individual person thinks or feels. The people Diamond describes seem immersed in the collective. We generally don’t see them exercising much individual agency. (Tribal Lessons; of course, Brooks may be smarting from reviews that called his book The Dumbest Story Ever Told)

[…] In many ways, Ruth Benedict does exactly what Wade Davis wanted Jared Diamond to do–rather than providing a how-to manual of “tips we can learn,” to really investigate the existence of other possibilities:

The voices of traditional societies ultimately matter because they can still remind us that there are indeed alternatives, other ways of orienting human beings in social, spiritual and ecological space. This is not to suggest naively that we abandon everything and attempt to mimic the ways of non-industrial societies, or that any culture be asked to forfeit its right to benefit from the genius of technology. It is rather to draw inspiration and comfort from the fact that the path we have taken is not the only one available, that our destiny therefore is not indelibly written in a set of choices that demonstrably and scientifically have proven not to be wise. By their very existence the diverse cultures of the world bear witness to the folly of those who say that we cannot change, as we all know we must, the fundamental manner in which we inhabit this planet. (Wade Davis review of Jared Diamond; and perhaps one of the best contemporary versions of this project is Wade Davis, The Wayfinders: Why Ancient Wisdom Matters in the Modern World)

[…] This history reveals the major theme missing from both Benedict’s Patterns of Culture and especially missing from Diamond–an anthropology of interconnection. That as Eric Wolf described in Europe and the People Without History peoples once called primitive–now perhaps more politely termed tribal or traditional–were part of a co-production with Western colonialism. This connection and co-production had already been in process long before anthropologists arrived on the scene. Put differently, could the Dobuan reputation for being infernally nasty savages have anything to do with the white recruiters of indentured labour, which Benedict mentions (1934:130) but then ignores? Could the revving up of the Kwakiutl potlatch and megalomaniac gamuts have anything to do with the fur trade?

The Collapse Of Jared Diamond
by Louis Proyect, Swans Commentary

In general, the approach of the authors is to put the ostensible collapse into historical context, something that is utterly lacking in Diamond’s treatment. One of the more impressive record-correcting exercises is Terry L. Hunt and Carl P. Lipo’s Ecological Catastrophe, Collapse, and the Myth of “Ecocide” on Rapa Nui (Easter Island). In Collapse, Diamond judged Easter Island as one of the more egregious examples of “ecocide” in human history, a product of the folly of the island’s rulers whose decision to construct huge statues led to deforestation and collapse. By chopping down huge palm trees that were used to transport the stones used in statue construction, the islanders were effectively sealing their doom. Not only did the settlers chop down trees, they hunted the native fauna to extinction. The net result was a loss of habitat that led to a steep population decline.

Diamond was not the first observer to call attention to deforestation on Easter Island. In 1786, a French explorer named La Pérouse also attributed the loss of habitat to the “imprudence of their ancestors for their present unfortunate situation.”

Referring to research about Easter Island by scientists equipped with the latest technologies, the authors maintain that the deforestation had nothing to do with transporting statues. Instead, it was an accident of nature related to the arrival of rats in the canoes of the earliest settlers. Given the lack of native predators, the rats had a field day and consumed the palm nuts until the trees were no longer reproducing themselves at a sustainable rate. The settlers also chopped down trees to make a space for agriculture, but the idea that giant statues had anything to do with the island’s collapse is as much of a fiction as Diamond’s New Yorker article.

Unfortunately, Diamond is much more interested in ecocide than genocide. If people interested him half as much as palm trees, he might have said a word or two about the precipitous decline in population that occurred after the island was discovered by Europeans in 1722. Indeed, despite deforestation there is evidence that the island’s population grew between 1250 and 1650, the period when deforestation was taking place — leaving aside the question of its cause. As was the case when Europeans arrived in the New World, a native population was unable to resist diseases such as smallpox and died in massive numbers. Of course, Diamond would approach such a disaster with his customary Olympian detachment and write it off as an accident of history.

While all the articles pretty much follow the same narrowly circumscribed path as the one on Easter Island, there is one that adopts the Grand Narrative that Jared Diamond has made a specialty of and beats him at his own game. I am referring to the final article, Sustainable Survival by J.R. McNeill, who describes himself in a footnote thusly: “Unlike most historians, I have no real geographic specialization and prefer — like Jared Diamond — to hunt for large patterns in the human past.”

And one of those “large patterns” ignored by Diamond is colonialism. The greatest flaw in Collapse is that it does not bother to look at the impact of one country on another. By treating countries in isolation from one another, it becomes much easier to turn the “losers” into examples of individual failing. So when Haiti is victimized throughout the 19th century for having the temerity to break with slavery, this hardly enters into Diamond’s moral calculus.

Compassion Sets Humans Apart
by Penny Spikins, Sapiens

There are, perhaps surprisingly, only two known cases of likely interpersonal violence in the archaic species most closely related to us, Neanderthals. That’s out of a total of about 30 near-complete skeletons and 300 partial Neanderthal finds. One—a young adult living in what is now St. Césaire, France, some 36,000 years ago—had the front of his or her skull bashed in. The other, a Neanderthal found in Shanidar Cave in present-day Iraq, was stabbed in the ribs between 45,000 and 35,000 years ago, perhaps by a projectile point shot by a modern human.

The earliest possible evidence of what might be considered warfare or feuding doesn’t show up until some 13,000 years ago at a cemetery in the Nile Valley called Jebel Sahaba, where many of the roughly 60 Homo sapiens individuals appear to have died a violent death.

Evidence of human care, on the other hand, goes back at least 1.5 million years—to long before humans were anatomically modern. A Homo ergaster female from Koobi Fora in Kenya, dated to about 1.6 million years ago, survived several weeks despite a toxic overaccumulation of vitamin A. She must have been given food and water, and protected from predators, to live long enough for this disease to leave a record in her bones.

Such evidence becomes even more notable by half a million years ago. At Sima de los Huesos (Pit of Bones), a site in Spain occupied by ancestors of Neanderthals, three of 28 individuals found in one pit had severe pathology—a girl with a deformed head, a man who was deaf, and an elderly man with a damaged pelvis—but they all lived for long periods of time despite their conditions, indicating that they were cared for. At the same site in Shanidar where a Neanderthal was found stabbed, researchers discovered another skeleton who was blind in one eye and had a withered arm and leg as well as hearing loss, which would have made it extremely hard or impossible to forage for food and survive. His bones show he survived for 15 to 20 years after injury.

At a site in modern-day Vietnam called Man Bac, which dates to around 3,500 years ago, a man with almost complete paralysis and frail bones was looked after by others for over a decade; he must have received care that would be difficult to provide even today.

All of these acts of caring lasted for weeks, months, or years, as opposed to a single moment of violence.

Violence, Okinawa, and the ‘Pax Americana’
by John W. Dower, The Asia-Pacific Journal

In American academic circles, several influential recent books argue that violence declined significantly during the Cold War, and even more precipitously after the demise of the Soviet Union in 1991. This reinforces what supporters of US strategic policy, including Japan’s conservative leaders, have always claimed. Since World War II, they contend, the militarized Pax Americana, including nuclear deterrence, has ensured the decline of global violence.

I see the unfolding of the postwar decades through a darker lens.

No one can say with any certainty how many people were killed in World War II. Apart from the United States, catastrophe and chaos prevailed in almost every country caught in the war. Beyond this, even today criteria for identifying and quantifying war-related deaths vary greatly. Thus, World War II mortality estimates range from an implausible low of 50 million military and civilian fatalities worldwide to as many as 80 million. The Soviet Union, followed by China, suffered by far the greatest number of these deaths.

Only when this slaughter is taken as a baseline does it make sense to argue that the decades since World War II have been relatively non-violent.

The misleading euphemism of a “Cold War” extending from 1945 to 1991 helps reinforce the decline-of-violence argument. These decades were “cold” only to the extent that, unlike World War II, no armed conflict took place pitting the major powers directly against one another. Apart from this, these were years of mayhem and terror of every imaginable sort, including genocides, civil wars, tribal and ethnic conflicts, attempts by major powers to suppress anti-colonial wars of liberation, and mass deaths deriving from domestic political policies (as in China and the Soviet Union).

In pro-American propaganda, Washington’s strategic and diplomatic policies during these turbulent years and continuing to the present day have been devoted to preserving peace, defending freedom and the rule of law, promoting democratic values, and ensuring the security of its friends and allies.

What this benign picture ignores is the grievous harm as well as plain folly of much postwar US policy. This extends to engaging in atrocious war conduct, initiating never-ending arms races, supporting illiberal authoritarian regimes, and contributing to instability and humanitarian crises in many parts of the world.

Such destructive behavior was taken to new levels in the wake of the September 11, 2001, attack on the World Trade Center and Pentagon by nineteen Islamist hijackers. America’s heavy-handed military response has contributed immeasurably to the proliferation of global terrorist organizations, the destabilization of the Greater Middle East, and a flood of refugees and internally displaced persons unprecedented since World War II.

Afghanistan and Iraq, invaded following September 11, remain shattered and in turmoil. Neighboring countries are wracked with terror and insurrection. In 2016, the last year of Barack Obama’s presidency, the US military engaged in bombing and air strikes in no less than seven countries (Afghanistan, Iraq, Pakistan, Somalia, Yemen, Libya, and Syria). At the same time, elite US “special forces” conducted largely clandestine operations in an astonishing total of around 140 countries–amounting to almost three-quarters of all the nations in the world.

Overarching all this, like a giant cage, is America’s empire of overseas military bases. The historical core of these bases in Germany, Japan, and South Korea dates back to after World War II and the Korean War (1950-1953), but the cage as a whole spans the globe and is constantly being expanded or contracted. The long-established bases tend to be huge. Newer installations are sometimes small and ephemeral. (The latter are known as “lily pad” facilities, and now exist in around 40 countries.) The total number of US bases presently is around 800.

Okinawa has exemplified important features of this vast militarized domain since its beginnings in 1945. Current plans to relocate US facilities to new sites like Henoko, or to expand to remote islands like Yonaguni, Ishigaki, and Miyako in collaboration with Japanese Self Defense Forces, reflect the constant presence but ever changing contours of the imperium. […]

These military failures are illuminating. They remind us that with but a few exceptions (most notably the short Gulf War against Iraq in 1991), the postwar US military has never enjoyed the sort of overwhelming victory it experienced in World War II. The “war on terror” that followed September 11 and has dragged on to the present day is not unusual apart from its seemingly endless duration. On the contrary, it conforms to this larger pattern of postwar US military miscalculation and failure.

These failures also tell us a great deal about America’s infatuation with brute force, and the double standards that accompany this. In both wars, victory proved elusive in spite of the fact that the United States unleashed devastation from the air greater than anything ever seen before, short of using nuclear weapons.

This usually comes as a surprise even to people who are knowledgeable about the strategic bombing of Germany and Japan in World War II. The total tonnage of bombs dropped on Korea was four times greater than the tonnage dropped on Japan in the US air raids of 1945, and destroyed most of North Korea’s major cities and thousands of its villages. The tonnage dropped on the three countries of Indochina was forty times greater than the tonnage dropped on Japan. The death tolls in both Korea and Indochina ran into the millions.

Here is where double standards enter the picture.

This routine US targeting of civilian populations between the 1940s and early 1970s amounted to state-sanctioned terror bombing aimed at destroying enemy morale. Although such frank labeling can be found in internal documents, it usually has been taboo in pro-American public commentary. After September 11, in any case, these precedents were thoroughly scrubbed from memory.

“Terror bombing” has been redefined to now mean attacks by “non-state actors” motivated primarily by Islamist fundamentalism. “Civilized” nations and cultures, the story goes, do not engage in such atrocious behavior. […]

Nuclear weapons were removed from Okinawa after 1972, and the former US and Soviet nuclear arsenals have been substantially reduced since the collapse of the USSR. Nonetheless, today’s US and Russian arsenals are still capable of destroying the world many times over, and US nuclear strategy still explicitly targets a considerable range of potential adversaries. (In 2001, under President George W. Bush, these included China, Russia, Iraq, Iran, North Korea, Syria, and Libya.)

Nuclear proliferation has spread to nine nations, and over forty other countries including Japan remain what experts call “nuclear capable states.” When Barack Obama became president in 2009, there were high hopes he might lead the way to eliminating nuclear weapons entirely. Instead, before leaving office his administration adopted an alarming policy of “nuclear modernization” that can only stimulate other nuclear nations to follow suit.

There are dynamics at work here that go beyond rational responses to perceived threats. Where the United States is concerned, obsession with absolute military supremacy is inherent in the DNA of the postwar state. After the Cold War ended, US strategic planners sometimes referred to this as the necessity of maintaining “technological asymmetry.” Beginning in the mid 1990s, the Joint Chiefs of Staff reformulated their mission as maintaining “full spectrum dominance.”

This envisioned domination now extends beyond the traditional domains of land, sea, and air power, the Joint Chiefs emphasized, to include space and cyberspace as well.

 

Happy Birthday, Charter of the Forest!

This is the birthday of an important historical document. Eight hundred years ago on this day, 6 November 1217, the Charter of the Forest was first issued. Along with the closely related Magna Carta (1215), it formally and legally established a precedent for the rights of commoners in the English-speaking world. It was one of the foundations of the English Commons and gave new standing to the commoner as a citizen. And it was one of the precursors of the Rights of Englishmen, Lockean land rights, and the United States Bill of Rights.

The Charter of the Forest was not merely a defense of the rights of commoners; in defending them, it also posed a challenge to the rights of rulers. It was a sign of the weakening justification and privileges of monarchy. And such a challenge would feed into the republicanism that emerged during the Renaissance and Reformation and came into bloom during the early modern era. This populist tradition helped to incite the Peasants’ Revolt, the English Civil War, and the American Revolution. The rights of the Commons inspired the Levellers to fight for popular sovereignty, extended suffrage, equality before the law, and religious tolerance. And it took even more extreme form in the primitive communism of the Diggers.

Such a charter was one of the expressions of what would later develop into liberal and radical thought during the Enlightenment and the early modern revolutions. Democracy, as we know it, would form out of this. Through the American founders and revolutionary pamphleteers, these legacies and ideas would shape the new world that followed. The ideas would remain potent enough to divide the country during the American Civil War.

We should take seriously what it means that these core Anglo-American traditions have been eroded and their ancient origins largely forgotten. It’s a loss of freedom and a loss of identity.

Who owns the earth?
by Antonia Malchik, Aeon

The commons are just what they sound like: land, waterways, forests, air. The natural resources of our planet that make life possible. Societies throughout history have continually relied on varying systems of commons usage that strove to distribute essential resources equitably, like grazing and agricultural land, clean water for drinking and washing, foraged food, and wood for fuel and building. As far back as 555 CE the commons were written into Roman law, which stated outright that certain resources belonged to all, never owned by a few: ‘By the law of nature these things are common to mankind – the air, running water, the sea and consequently the shores of the sea.’

The power of this tradition is difficult to explain but even more difficult to overstate, and its practice echoes throughout Western history. The Magna Carta, agreed to in 1215 by England’s King John at the insistence of his barons, protected those nobles from losing their lands at the whim of whatever sovereign they were serving. It also laid down the right to a trial by one’s peers, among other individual rights, and is the document widely cited as the foundation of modern democracy.

What is less well-known is the Charter of the Forest, which was agreed to two years later by the regent for Henry III, King John having died in 1216. With the Charter, ‘management of common resources moves from the king’s arbitrary rule’, says Carolyn Harris, a Canadian scholar of the Magna Carta, ‘to the common good’. The Charter granted what are called subsistence rights, the right that ‘[e]very free man may henceforth without being prosecuted make in his wood or in land he has in the forest a mill, a preserve, a pond, a marl-pit, a ditch, or arable outside the covert in arable land, on condition that it does not harm any neighbour’. Included was the permission to graze animals and gather the food and fuel that one needed to live.

These rights went over to America intact and informed that country’s founding fathers as they developed their own system of laws, with a greater emphasis on the rights of commoners to own enough land to live independently. (That this land belonged to the native people who already lived there didn’t factor much into their reasoning.) For Thomas Jefferson, according to law professor Eric T Freyfogle in his 2003 book The Land We Share, ‘[t]he right of property chiefly had to do with a man’s ability to acquire land for subsistence living, at little or no cost: It was a right of opportunity, a right to gain land, not a right to hoard it or to resist public demands that owners act responsibly.’

Benjamin Franklin, too, believed that any property not required for subsistence was ‘the property of the public, who by their laws, have created it, and who may therefore by other laws dispose of it, whenever the welfare of the public shall demand such disposition’. The point was for an individual or family to gain the means for an independent life, not to grow rich from land ownership or to take the resources of the commons out of the public realm. This idea extended to limiting trespassing laws. Hunting on another’s unenclosed land was perfectly legal, as was – in keeping with the Charter of the Forest – foraging.

The land itself, not just the resources it contained, was part of the commons. Consider the implications of this thinking for our times: if access to the means for self-sustenance were truly the right of all, if both public resources and public land could never be taken away or sold, then how much power could the wealthy, a government, or corporations have over everyday human lives?

The idea of the commons isn’t exclusive to English and American history. In Russia, since at least the 1400s and continuing in various forms until the Bolshevik revolution of 1917, land was managed under the mir system, or ‘joint responsibility’, which ensured that everyone had land and resources enough – including tools – to support themselves and their families. Strips of land were broken up and redistributed every so often to reflect changing family needs. Land belonged to the mir as a whole. It couldn’t be taken away or sold. In Ireland from before the 7th century (when they were first written down) to the 17th, Brehon laws served a similar purpose, with entire septs or clans owning and distributing land until invading English landlords carved up the landscape, stripped its residents of ancestral systems and tenancy rights, and established their estates with suppression and violence. The Scottish historian Andro Linklater examines variations on these collective ownership systems in detail in his 2013 book, Owning the Earth: the adat in Iban, crofting in Scotland, the Maori ways of use in New Zealand, peasant systems in India and China and in several Islamic states, and of course on the North American continent before European invasion and settlement.

But the commons are not relics of dusty history. The Kyrgyz Republic once had a successful system of grazing that benefited both herdsmen and the land. Shattered during Soviet times in favour of intensive production, the grazing commons is slowly being reinstated after passage of a Pasture Law in 2009, replacing a system of private leases with public use rights that revolve around ecological knowledge and are determined by local communities. In Fiji, villages have responded to pressures from overfishing and climate change by adopting an older system of temporary bans on fishing called tabu. An article in the science magazine Nautilus describes the formation of locally managed Marine Protected Areas that use ancient traditions of the commons, and modern scientific understanding, to adapt these communal fishing rights and bans to the changing needs of the ecosystem.

Preservation of the commons has not, then, been completely forgotten. But it has come close. The commons are, essentially, antithetical both to capitalism and to limitless private profit, and have therefore been denigrated and abandoned in many parts of the world for nearly two centuries.

Attributes of Thomas Paine

“Paine’s The Age of Reason: I am willing you should call this the Age of Frivolity, as you do, and would not object if you had named it the Age of Folly, Vice, Frenzy, Brutality, Daemons, Bonaparte, Tom Paine, or the Age of the Burning Brand from the Bottomless Pit, or anything but the Age of Reason. I know not whether any man in the world has had more influence on its inhabitants or affairs for the last thirty years than Tom Paine. There can no severer satyr on the age. For such a mongrel between pig and puppy, begotten by a wild boar on a bitch wolf, never before in any age of the world was suffered by the poltroonery of mankind, to run through such a career of mischief. Call it then the Age of Paine.”
~ John Adams

The Age of Paine, out of which the modern world was born. And being reminded of this, my mind ever drifts back to the hope for a new Age of Paine. No one can doubt that Thomas Paine was ahead of his time. But it becomes ever more apparent that, all these centuries later, he is also ahead of our time. We need less John Adams, more Thomas Paine.

So, who exactly was Thomas Paine? What kind of person was he? What did he embody and express?

First of all, Paine was a working-class bloke who aspired to something greater. But he didn’t start his life with grand visions. He would have been happy with a good job and a family, had life worked out that way; instead he suffered loss after loss. He sought family life years before self-improvement became a central focus. He sought self-improvement years before he turned to reform. And he sought reform years before revolution ever crossed his mind. It wasn’t until middle age that he found himself carried ashore to the American colonies, impoverished and near death. He was a sensitive soul in a harsh world, where there was little justice to be found other than what one fought for. So, he finally decided to fight.

That is where his personality comes in. He was a kind and devoted friend, but he could also be a fierce critic and unrelenting enemy. He took betrayal as a personal attack, even when what was betrayed was only his principles. Having seen the dark side of life, he was an ornery asshole with a bad attitude. In time, he would become a morally righteous troublemaker and rabble-rouser, a highly effective disturber of the peace and a serious threat to the status quo. To the targets of his sharp tongue, he was opinionated, arrogant, and haughty. He was tolerant of much, but not of bullshit, no matter its source.

Paine was a social justice warrior with heavy emphasis on the latter part. He didn’t back down from fights, and he was a physically capable man, not afraid to be in a literal battle. He considered a pen and sword to be equally powerful, depending on circumstances, and he took up both when necessary. If he were alive today, he would be punching Nazis and writing inspiring words for others to join him in the fight for freedom. The likes of Adams and Burke, for all their complaints, never suggested Paine was a coward or a hypocrite. He stated in no uncertain terms what he believed was worth fighting for and then, unlike Adams and Burke, he fought for it. Without the slightest doubt, he had the courage of his convictions.

Yet he was never a dogmatic ideologue. He was always focused on what would pragmatically improve the lives of average people. He didn’t allow himself to be carried away by ideological zeal — demonstrated by his offering a moderating voice for democratic principles and process even as the French Revolution took a dark turn, which landed him in prison awaiting the guillotine. Injustice from reactionaries posing as revolutionaries, to his mind, was as dangerous as injustice from monarchs, aristocrats, and plutocrats.

Most of all, Paine was a seeker and speaker of truth. He refused to be silenced, refused to back down, and refused to be kept in his place. He dared to question and doubt, even if it meant knocking over and slaughtering sacred cows. His first concern wasn’t winning popularity contests. He had no aspiration to be like the self-styled noble aristocracy, much less a respectable leader of the ruling elite. He would befriend the powerful when they were willing to be allies and then attack the very same people when they proved themselves to be false and unworthy. His opinions didn’t sway with the wind, but his understanding did develop over time. He became ever more clear in what he saw as required to create and maintain a truly free society.

He is known for having been a writer. But he had a varied history before he became a newspaperman and a muckraking journalist, which eventually led to his revolutionary pamphleteering. He held many ordinary jobs in the early decades of his life: a staymaker by training, he was a privateer for a short period, then a tax collector, and took on odd jobs in between. Like anyone else, he was simply trying to make his way in the world. No one is born a revolutionary. It took most of his life to become the man he is now remembered as.

So what kind of person did he become? He was a populist no doubt, a man of the people, what some would unfairly dismiss as a demagogue. He was simply acting and speaking from what he personally experienced and understood about the world. That led him to develop into a freedom fighter — anti-elitist, anti-authoritarian, and anti-fundamentalist. More basically, he was a left-liberal, social democrat, economic progressive, and civil libertarian. His political commitments expressed themselves in many ways, from abolitionism to feminism, from universal suffrage to free speech rights, from fighting war profiteering to demanding a basic income.

Still, it doesn’t seem that Paine saw himself as a political being. He would have preferred to focus on other things, had world events allowed him. This was explained by Edward G. Gray in Tom Paine’s Iron Bridge (pp. 3-5):

“OF THE MANY ESSAYS Thomas Paine wrote, among the least known is “The Construction of Iron Bridges.” This brief history of Paine’s architectural career, written in 1803, was of no particular interest to his political followers, nor has it been to his many subsequent biographers. The essay after all has little to do with the radical critique of hereditary monarchy or the cult of natural rights for which Paine has been so justly celebrated. But it is a window into his world. Many of the luminaries in Paine’s circle were inventors. Paine’s friend Benjamin Franklin devised bifocals, the lightning rod, the glass armonica, and countless other devices. Another friend, Thomas Jefferson, invented an improved plow and a mechanism for copying letters. Some revolutionary leaders not known for their inventions devoted time to building things. George Washington often seems to have lavished as much attention on his house at Mount Vernon as on matters of state. From this vantage, Paine seems no different.

“But Paine was different. Unlike so many of his American contemporaries, Paine had a narrow field of interests. He never showed any passion for art or philosophy. He claimed repeatedly to have learned little from books. He did have other mechanical interests. He attempted to invent a smokeless candle and later in life he contemplated a perpetual-motion machine driven by gunpowder. But neither of these consumed Paine in the way his bridge did. Indeed, far from a gentlemanly hobby, bridge architecture became a career for Paine. In his essay on iron bridges, he wrote that he had had every intention of devoting himself fully to architecture but was drawn away by events beyond his control.

“The most disruptive of these was the 1790 publication by the British politician, and former friend of Paine, Edmund Burke, of Reflections on the Revolution in France. For Paine, Burke’s fierce denunciation of the course of events across the English Channel was about much more than France and its revolution; it was an attack on the political ideals on which his adopted country had been founded and on which a just future would depend. “The publication of this work of Mr. Burke,” Paine explained, “absurd in its principles and outrageous in its manner, drew me . . . from my bridge operations, and my time became employed in defending a system then established and operating in America and which I wished to see peaceably adopted in Europe.” The refutation of Burke became “more necessary,” for the moment, than the construction of the bridge.”

The political situation couldn’t be ignored, for it directly intruded upon the lives of individuals and impinged upon entire communities. And the scathing, cruel words of Burke hit Paine hard, for Burke was someone he had considered a friend. Even so, he remained a working class bloke in his attitude and concerns. That is why bridge-building had taken hold of his attention, as a practical endeavor in building public infrastructure in a young nation that had little of it. It wasn’t that he was an aspiring technocrat in the budding bureaucracy, as his concerns were on a human level. He was born to a father who was a skilled tradesman. As such, he was trained from a young age to think like a builder, with the concrete skills of constructing something to be used by people in their daily lives.

Still, he had a restless mind. Endlessly curious and a lifelong autodidact, he had interests wider than most. He surely read far more than he admitted to. His claims of being unlearned were more of a pose to give force to his arguments, a way of letting his principles stand on their own merit with no appeal to authority. He preferred to use concrete imagery and examples rather than to reference famous intellectuals and philosophical rhetoric. He didn’t value learning as a hobby, an attitude held by aristocrats. He had no desire to be a casual dilettante or Renaissance man.

He was above average in intelligence but no genius. He simply wanted to understand the world in order to make a difference. Mainly, he had talent for communicating and writing, which helped him stand out in a world that gave little respect to the working class. But what gave force to his words was his ability and willingness to imagine, dream, hope, and aspire. He was a visionary.

Sure, he was an imperfect person, as are we all. But knowing who he was, he didn’t try to be anything else. He felt driven toward something and his life was the following of that impulse, that daimonic inspiration. Such internal motivation was an anchor to his life, steadying his course amidst strong currents and troubling storms. Forced to make his own way, he had to figure it out step by step along a wandering path through the world. He was no Adams or Burke trying to position himself in the respectable social order by playing the role of paternalistic professional politician. Instead, he dedicated his entire life to the values and needs of the commoner, as inspired and envisioned by our common humanity.

Thomas Paine was born a nobody, spent his life poor, died forgotten, and departed this world with little left to his name, having given away everything he had to give. Some have maligned his life and work as a failure, judged his revolutionary dream as having gone wrong. Others would disagree and recent assessments have been more kind to him. His words remain and they still have much to offer us, reminding us of what kind of man he was and what kind of society we might yet become. May a new Age of Paine come to fulfill these promises.

“I speak an open and disinterested language, dictated by no passion but that of humanity. To me, who have not only refused offers, because I thought them improper, but have declined rewards I might with reputation have accepted, it is no wonder that meanness and imposition appear disgustful. Independence is my happiness, and I view things as they are, without regard to place or person; my country is the world, and my religion is to do good. […]

“When it shall be said in any country in the world, my poor are happy; neither ignorance nor distress is to be found among them; my jails are empty of prisoners, my streets of beggars; the aged are not in want, the taxes are not oppressive; the rational world is my friend, because I am a friend of its happiness: When these things can be said, then may the country boast of its constitution and its government.”
 ~ Thomas Paine, Rights of Man

Ancient Complexity

The ancient world is always fascinating. It’s very much a foreign world. And so it helps us to challenge simplistic thinking.

Such a thing as ‘race’ did not exist as we know it. Even within a single group, it was difficult to determine who belonged and who did not. After asking “How, then, did you know a Jew in antiquity when you saw one?”, Shaye J. D. Cohen stated “The answer is that you did not.”

That is partly because there was so much mixing across populations and so many local variations within populations. Widespread influences and syncretism dominated the ancient world. Ancient people in a particular region had more in common than not, for they were the products of common cultural histories. Large frames of understanding encompassed diverse peoples.

This shared inheritance and mutual bond was acknowledged, sometimes even emphasized, by many going back millennia. It’s only over the distance of time that lines of distinction become hardened within historical texts. The lived reality at the time, however, was messier and more interesting.

That should force us to rethink our modern identities, as so much of who we think we are has been built on who we thought ancient people were. When we speak of ancient Greeks and Jews, the traditions of polytheism and monotheism, do we have any clue of what we are talking about? In our claims of being cultural descendants of earliest Western civilization, what do we think we have inherited?

Maybe we should ask another question as well. What have we lost and forgotten along the way?

* * *

Of God and Gods
by Jan Assmann
Kindle Locations 730-769

From the viewpoint of monotheism, polytheism seems prehistoric: original, primitive, immature, a mere precursor of monotheism. However, as soon as one changes this perspective and tries to view polytheism from within, say, from the viewpoint of ancient Egypt, polytheism appears as a great cultural achievement. In polytheistic religions the deities are clearly differentiated and personalized by name, shape, and function. The great achievement of polytheism is the articulation of a common semantic universe. It is this semantic dimension that makes the names translatable, that is, makes it possible for gods from different cultures or different regions and traditions within a culture to be equated with one another. Tribal religions are ethnocentric. The powers and ancestral spirits that are worshiped by one tribe are irreducibly and untranslatably different from those worshiped by another. By contrast, the highly differentiated members of polytheistic pantheons easily lend themselves to cross-cultural translation or “interpretation.” Translation works because the gods have a well-defined function in the maintenance of cosmic, political, and social order. The sun god of one group, culture, or religion is the same as the sun god of another. Most of the deities have a cosmic competence and reference or are related to a well-defined cultural domain, such as writing, craftsmanship, love, war, or magic. This specific responsibility and competence renders a deity comparable to other deities with similar traits and makes their names mutually translatable.

The tradition of translating or interpreting foreign divine names goes back to the innumerable glossaries equating Sumerian and Akkadian words, among which appear lists of divine names in two or even three languages, such as Emesal (women’s language; used as a literary dialect), Sumerian, and Akkadian. The most interesting of these sources is the explanatory list Anu ša amēli, which contains three columns, the first two giving the Sumerian and Akkadian names, respectively, and the third listing the functional definition of each deity.5 This explanatory list gives what may be called the “meaning” of divine names, making explicit the principle that underlies the equation or translation of gods. In the Kassite period of the Late Bronze Age the lists are extended to include such languages as Amorite, Hurritic, Elamite, and Kassite in addition to Sumerian and Akkadian. In these cases the practice of translating divine names was applied to very different cultures and religions.

The origin of this practice may be found in the field of international law. Treaties had to be sealed by solemn oaths, and the gods who were invoked in these oaths had to be recognized by both parties. The list of the gods involved conventionally closed the treaty. They necessarily had to be equivalent in terms of their function and, in particular, their rank. Intercultural theology became a concern of international law.

The growing political and commercial interconnectedness of the ancient world and the practice of cross-cultural translation of everything, including divine names, gradually led to the concept of a common religion. The names, iconographies, and in short, the cultures might differ, but the gods remained the same everywhere. This concept of religion as the common background of cultural diversity and the principle of cultural translatability eventually led to the late Hellenistic outlook, where the names of the gods mattered little in view of the overwhelming natural evidence for their existence and presence in the world.

The idea that the various nations basically worshiped the same deities albeit under different names and in different forms eventually led to the belief in a “Supreme Being” (Gk. Hypsistos, “the Highest One”).6 It essentially comprised not only the myriad known and unknown deities but also those three or four gods who, in the contexts of different religions, play the role of the highest god (usually Zeus, Sarapis, Helios, and Iao = YHWH). This super-deity is addressed by appellations such as Hypsistos (supreme), and by the widespread “One-God” predication Heis Theos. Oracles typically proclaim particular gods to be a unity comprised of a number of other gods:

One Zeus, one Hades, one Helios, one Dionysos, One god in all gods.7

In one of these oracles, Iao (YHWH), the God of the Jews, is proclaimed to be the god of time (Olam-Aion), appearing as Hades in winter, Zeus in springtime, Helios in summer, and “Habros Iao” in autumn.8 These oracles and predications manifest a quest for the sole and supreme divine principle behind the innumerable multitude of specific deities. This is typical of the “ecumenical age” (Voegelin) and seems to correspond to efforts toward political unification.9 The belief in the “Supreme Being” (Hypsistos) has a distinctly universalist character.

The sons of Ogyges call me Bacchus,
Egyptians think me Osiris,
Mysians name me Phanaces,
Indians regard me as Dionysus,
Roman rites make me Liber,
The Arab race thinks me Adoneus,
Lucaniacus the Universal God.10

This tradition of invoking the highest god according to the names given him by the various nations expresses a general conviction in Late Antiquity regarding the universality of religious truth, the relativity of religious institutions and denominations, and the conventionality of divine names. According to Servius, the Stoics taught that there is only one god with various names that differ according to actions and offices. Varro (116-27 BCE), who knew about the Jews from Poseidonios, was unwilling to differentiate between Jove and Yahweh because he felt that it mattered little by which name the god was called as long as the same thing was meant (nihil interesse censens quo nomine nuncupetur, dum eadem res intelligatur).11 Porphyry felt that the names of the gods were purely conventional.12 Symmachus, a pagan prefect, wondered what difference it made “by which wisdom each of us arrives at truth? It is not possible that only one road leads to so sublime a mystery.” 13 Celsus argued that “it makes no difference whether one calls god ‘Supreme’ (Hypsistos) or Zeus or Adonai or Sabaoth or Ammon as the Egyptians do, or Papaios as do the Scythians. The name does not matter when it is evident what or who is meant.” 14 In his treatise on Isis and Osiris Plutarch makes this point, stating that no one would “regard the gods as different among different nations nor as Barbarian and Greek and as southern and northern. But just as the sun, moon, heaven, earth and sea are common to all, although they are given different names by the various nations, so it is with the one reason (logos) which orders these things and the one providence which has charge of them.” 15 Seneca stressed that this conviction was based on natural evidence: “This All, which you see, which encompasses divine and human, is One, and we are but members of a great body.” 16 According to Mark Smith, “Pliny the Elder (Natural History, bk. 2, V. 15) put the general point in a pithy formulation for deities in the world, that they are a matter of ‘different names to different peoples’ (nomina alia aliis gentibus).”17

The Mythic Past
by Thomas L. Thompson
pp. 306-308

Such theology, in which the Bible, both Old and New Testaments, shared, reflects a world-view whose centre lay in an awareness of human ignorance and of the deceptiveness of sensory perception, associated with a nearly universal recognition of ineffable and transcendent qualities in life, fertility and wisdom.

This theology was an aspect of ancient philosophy and science. It was not so specifically Hellenistic as it was a product of the unified intellectual culture that had been created by the empire as early as the Assyrian period. It is a knowledge that is specifically set in contrast and in opposition to the old story-worlds of gods. From the early historian Hecateus to Plato and the Greek playwrights, and from the Babylonian Nabonidus to Isaiah and the author of Exodus, polytheism and monotheism were hardly ever opposed to each other. They rather reflected different aspects of a common spectrum of intellectual development. There was continuity between polytheism and monotheism as well as a process of changing interpretation. Hardly sudden or revolutionary, the changes of world-view were the result of more than a millennium of cultural integration. Crisis in such change was associated first of all with the understanding of divine transcendence, and with ideas regarding the truth, function and legitimacy of the personal gods of story. The struggle over beliefs about the unity of the divine came late and always had an explicitly political focus.

From at least early in the Assyrian period of the empire, in what are often thought of as the polytheistic worlds of Egypt, Syria and Mesopotamia, reflective people had well understood a clear difference between the gods themselves and the statues and images that were used to represent them. They also understood the difference between the forces of nature and the divine powers that had created them. […]

In the simpler West Semitic world these tendencies were much more clearly marked. In the world of trade and shipping, where contact among many cultures and languages was commonplace, Syrian and Phoenician merchants readily identified specific gods of one region with the gods of similar function of another region. In this way, Astarte could be identified with Ishtar in the east and with Venus in the far west. Yam could be identified with Poseidon, and Ba’al with Hadad and Yahweh. Such syncretism was encouraged by the fact that many of the names of West Semitic deities directly reflected a given deity’s function. The name El translates simply as ‘God’ and is easily identified with the Aegean world’s Zeus. Ba’al translates as ‘master’ or ‘husband’, Mot as ‘death’, Yam as ‘sea’, and the like. At times they distributed the functions differently, so that for instance Ba’al could be identified with both Hadad and El, or Yahweh with both Ba’al and Elohim. It is only a very small step to recognize that implicit in such gods were functions of a single divine world. As different peoples gave these functions different names, the recognition came quickly that the gods themselves differed from each other because of distinctions given to them by humans. The specific gods that people knew were the gods they themselves had made to express the divine world.

One of the most frequent ways that West Semitic story, poetry and prayer particularized gods for very specific functions was to take the name of a high god – usually El, but Ba’al, Hadad and Yahweh were used as well – and add a descriptive epithet. In this way, El developed many different faces. He was ‘the most high’, ‘the merciful’, ‘the god of the storm’. Also places or names of specific towns or regions made these nearly universal deities more particular. So we find ‘Yahweh of Samaria’ and ‘Yahweh of Teman’. We have Ba’al and his Asherah, as well as Yahweh and his Asherah, without confusion. The divine name came to reflect a very particular local deity. At the same time, a default understanding of universalism is implicit. […]

The development of such an understanding of monotheism was hardly antagonistic to the worship of a variety of gods. Quite the contrary, this variety of gods, of individuals and of gods who changed, was a necessary aspect of human relationships to the transcendent. Both the gods of tradition and the forms of worship became subject to such critical thought. Gods, as human reflections, could be false, just as their worship could be empty and corrupt. By the late fifth century BCE, one or other form of this transcendent monotheism is known in many different regions. In Greece, it can best be seen in the writings of Plato about the One, True, Good and Beautiful. In Babylon the god Sin is spoken of in some texts in the same way as Ba’al Shamem is in Syria. In Persia, Ahura Mazda is frequently so understood. In the Bible itself, many of the references to ‘God’ (Elohim), the ‘God of heaven’, ‘God the most high’, and ‘God’ in such expressions as ‘God’s Yahweh’ should be understood in this way. […]

One could dismiss the gods of stories and legend. One could also find a way of thinking of them with integrity. It is this kind of task that exercised many ancient writers of the late Persian and early Hellenistic periods. The early writers of the Bible were among them.

The first story in the Bible in which God meets Moses, the story of the burning bush in Exodus 3–6, illustrates well how a revision of a tradition’s stories can revive and modernize old world-views and outmoded traditions by understanding them in new ways. Traditional beliefs in the old gods of Palestine are saved in Exodus by having Yahweh, the long-forgotten god of ancient Israel, understood not as God himself, but as the name, the representative of the true God – the way that ancient Israel knew the divine. Yahweh recurrently plays a role in the Bible’s narratives as mediator between the Most High and Israel, sharing this role with his messiah, with his son, with the king and the prophets. Yahweh is the primary means by which ancient Israel knew the divine. The divine as expressed by the Yahweh tradition also plays the role of the philosopher’s stumbling block, as it does so forcefully throughout the Book of Job. On the other hand, Yahweh is equally freely identified with the true God, when addressed directly in prayer and song, and plays that role in some narratives. The story of the theophany in the burning bush in Exodus 3–6 also explains how the divine had been hidden in the worship of the even more ancient gods of Palestine: the lost gods of the patriarchs. These included even an El Shaddai who was no longer either worshipped or remembered. The ‘true meaning’ of the many, now fragmented stories of patriarchs and heroes is gathered together to be remembered and preserved in a way that the past had not grasped or properly understood. More than that, the writers of the Bible were free to create all their great stories about Yahweh, that they might reflect how old Israel had understood – and had misunderstood – the divine.

Stories from Ancient Canaan
edited by Michael David Coogan
pp. 77-78

[T]he language used of Baal as storm god is echoed in the description of Yahweh, the god of Israel, who

makes the clouds his chariot,
walks on the wings of the wind,
makes the winds his messengers,
fire (and) flame his ministers
(Psalm 104:3-4)

As Baal defeated Sea, so also did Yahweh:

With his power he stilled the sea,
with his skill he smote Rahab,
with his wind he bagged Sea,
his hand pierced the fleeing serpent.
(Job 26:12-13)

Similar mythological language occurs in Psalm 89:10 and Isaiah 27:1.

Baal’s adversary has the double title “Prince Sea” and “Judge River”; “sea” and “river” occur frequently in biblical poetry as parallel terms. Most interesting in this context is the application of the pair to the two bodies of water that Yahweh mastered, enabling his people to escape Egypt and enter Canaan:

When Israel came out of Egypt,
the house of Jacob from people of a different language . . . .
The sea saw and fled,
the Jordan turned back
(Psalm 114:1, 3)

Just as the Reed Sea was split so that the Israelites crossed on dry land, so too the Jordan miraculously stopped and the chosen people entered the promised land with dry feet (Exodus 14:22; Joshua 3:13). The repetition of the event is rooted in the old poetic formula; sea and river are two aspects of the same reality.

pp. 80-81

The Canaanite temples on which the description of Baal’s house is based were the primary analogues for the temple of Yahweh in Jerusalem, planned by David and built by his son Solomon. Solomon’s architects and craftsmen were Phoenicians who used cedar from the Lebanon for both the temple and the adjacent royal residence. The juxtaposition of temple and palace was deliberate: the deity guaranteed the dynasty and was purposely identified with it. This adoption of Canaanite theory and practice in the house of the god of Israel was responsible for prophetic opposition to the temple from before its construction until the last days of its existence:

“Thus says Yahweh: Would you build me a house to live in? I have not lived in a house since the day I brought the Israelites up from Egypt until today, but I walked among them with a tent as my divine home. In all the places I walked with the Israelites, did I ever say to one of Israel’s judges, whom I commanded to shepherd my people Israel, ‘Why haven’t you built me a house of cedar?’ ” (II Samuel 7:5-7)

Despite this conservative resistance the temple was built, and at its dedication Solomon prayed to Yahweh using words that could have been addressed to Baal:

“Give rain to your land, which you gave to your people as their inheritance.” (I Kings 8:36)

The acquisition of a house marks the climax of Baal’s ascent to the kingship, a climax marked by his theophany in the storm and his assertion,

“No other king or non-king
shall set his power over the earth.”

Baal’s centrality in Ugaritic religion is demonstrable. For instance, a significant index of popular beliefs is the use of divine elements in personal names; at Ugarit the most frequently occurring deity in names is Baal, including his other names and titles, such as Hadad.

The transfer of power from an older sky god to a younger storm god is attested in many contemporary eastern Mediterranean cultures. Kronos was imprisoned and succeeded by his son Zeus, Yahweh succeeded El as the god of Israel, the Hittite god Teshub assumed kingship in heaven after having defeated his father Kumarbi, and Baal replaced El as the effective head of the Ugaritic pantheon. A more remote and hence less exact parallel is the replacement of Dyaus by Indra in early Hinduism. These similar developments can be accurately dated to the second half of the second millennium B.C., a time of prosperity and extraordinary artistic development, but also of political upheaval and natural disasters that ended in the collapse or destruction of many civilizations, including the Mycenaean, Minoan, Hittite, and Ugaritic. This was the period of the Trojan War, of the invasion of Egypt and the Palestinian coast by the Sea Peoples, of the international unrest related in the Amarna letters. In such a context a society might suppose that its traditional objects of worship had proved ineffective, that the pantheon in its established form had, like an entrenched royalty, become incapable of dealing with new challenges. At this point it might choose an extradynastic god, as Ugarit chose Baal, son of Dagon and not of El; and, beset by invasions from the sea and tidal waves arising from earthquakes, it might construct a mythology in which the new god demonstrated his mastery over the sea.

Studies in the Cult of Yahweh
by Morton Smith
pp. 53-54

It is to this new Renaissance world of the Assyrians and the Phoenicians, the Lydians and the Greeks, that the Israelites, a new people of recent invaders, belong. They belong to the beginning of the iron age culture, not to the end of the bronze. Their cultural history is paralleled most closely by that of the iron age Greeks: savage invaders of the thirteenth and twelfth centuries, they soon assimilated some elements of the culture they had overrun, but reshaped these by their own standards and interests and combined them with new elements from the new world around. Learning the alphabet from the Phoenicians, they began to write down their heroic legends and those of old holy places in their land (J and E, the Homeric epics and hymns); in these we can sometimes see the dim outlines of bronze age legends, but the heroes have become nomads and chieftains of the invasion period, and the mentality and language is that of early monarchies. As civilization developed, wealth and trade and social injustice increased, and the prophets emerged to denounce the wickedness of the rulers, defend the poor, and foretell the coming of the judgment of Yahweh or Zeus—Amos and Hesiod are conspicuously close in date, message, and prophetic vocation. But prophetic preaching was not directly effective. What the people needed for their protection was the publication of their laws, hitherto a matter of tradition in the heads of the rich—the city elders and the priests. Consequently, Deuteronomy and Draco are almost exact contemporaries, and both the social concerns and the proposed remedies of the deuteronomist are in many points parallel to those of Solon, who lived only a generation later. At the same time intellectual development is going on. Soon the gnomic “wisdom” poetry, traditional all over the near east, will be developed by gifted individuals like Theognis and the author of the first section of Proverbs to serve their own didactic purposes.
From such beginnings will come, a century later, such great theologico-philosophical and dramatic dialogues on the problem of evil as Job and Prometheus Bound. And after these great peaks of poetry the writers of both peoples will turn to topics of more “human interest.” Speculative thought will concern itself with the good life (so Epicurus and Ecclesiastes), narration will turn to hellenistic romances like Judith and Tobit. This is the outline of Israelite literature, and it belongs part and parcel, soul as well as body, to the iron age and to the Mediterranean, not to the Mesopotamian world.

pp. 99-101

All in all the evidence seems to indicate that while the Hebrew and Aramaic elements were more frequent in Palestine, and especially in Judea, and while Greek elements were more frequent elsewhere in Roman territory, nevertheless, the range of possible variations was roughly the same. Even rather extreme variants turn up where we should least expect them, e.g. substantial evidence for Essene influence has been found in the Epistle to the Ephesians (Kuhn, NTS 7.334ff.). It would not be implausible to suppose that a few aristocrats in Jerusalem had the sort of Greek education and philosophical attitude that we find in Philo. Although Josephus’ Greek was none too good, his rival, Justus of Tiberias, was much more at home in the language (Josephus, Vita 40 and 340) and was remembered in philosophical literature for one of his anecdotes about Plato (Diogenes Laertius 2.41).

Consequently the common in toto distinction of “Palestinian” from “diasporic” (not to mention “hellenistic”) Judaism is simply unjustified. […]

And what were the Samaritans? The destruction of the Gerizim temple in Hyrcanus’ time is most plausibly understood as an attempt at religious Anschluss: Thereby the Samaritans would be forced to bring their sacrifices to Jerusalem and subject themselves to the Hasmonean High-Priesthood, which evidently considered them as potential Ioudaioi—adherents [302] of the Judean cult. How many of them consented to enter the fold? How many refused and resorted to surreptitious sacrifices without a temple, or contented themselves with synagogue worship? We have no way of telling. Josephus distinguished “Samaritans” as an ethnic group, and was contemptuous of them as he was of Idumeans and Galileans, who by this time were undoubtedly “Jews”—i.e., adherents of the Jerusalem cult—but whom Josephus often distinguished from Ioudaioi when he used the latter term to mean (territorial) “Judeans.”

It is time to tear away this cobweb of nomenclature and try to see the facts it conceals. We have to do with the gradual extension through the Greco-Roman world (and through Arabia, Mesopotamia, Armenia, and Iran, which are usually ignored) of a peculiar cult and its associated literary, legal, and social traditions.

Part of the literary tradition was the legend that the cult had once been peculiar to a single family—allied tribes are often linked by such familial legends. This legend has persisted to the present: Christians, like Jews, are still theoretically one family, the “Israel of God.” However, already in antiquity the members of this theoretical family seem to have shown no significant physical uniformity. I do not recall any ancient reference to a man’s being recognized, from his physical appearance, as a Jew, except when the recognition was an inference from circumcision. (And even circumcision was not specific; it occurred among Arabs and Egyptians.) We can be reasonably sure that in the Greco-Roman period the followers of this cult had been so diversified by intermarriage, adoption, conversion, and adherence, that its spread cannot be considered as that of a single genetic stock.

The one thing common to all forms of the cult was the god called Yahweh, Yah, Iao, etc., who was often associated with various titles and epithets—Elohim, Adonai, Sabaoth, He who hears prayer, He whose name is blessed, etc. Most of these epithets, in one place or another, seem to have been hypostatized as independent but associated deities. [303] Yahweh might also be associated with other gods, of whom a long list could be compiled from the Old Testament times on. His most famous associate, of course, was to be Jesus. In a few systems of the sort usually called “gnostic” Yahweh appears as an inferior god, and he so appears, too, in a good many unsystematic magical texts. He was also included in various syncretistic expressions of late Roman paganism, for instance the famous Clarian oracle: “I declare Iao to be the highest god of all: Hades in winter, Zeus at the beginning of spring, Helios in summer, and splendid Iao in autumn” (Macrobius, Sat. 1.18).

To what extent such theological effusions implied worship is uncertain, but there is no question that the worship of Yahweh by pagans was ancient and extensive. Ezra proudly records the offerings made to Yahweh by the Persian emperors; the refusal of the Jerusalem temple staff to accept sacrifices offered by the Romans was the official beginning of the revolt of 66 A.D. (Josephus, War 2.409). Therefore to discuss the spread of this cult in terms of “the extension of Judaism”—whatever one means by “Judaism”—is to discuss only one part of a complex process. The neglected part of this process, which badly needs study, was an important factor in the extension of declared “Judaism,” since Dio Cassius reports that the name Ioudaioi was commonly applied to whatever men followed Jewish customs (37.17.1).

This report illustrates our need for another study—that of the ancient definitions of “Jew” and “Judaism,” with careful attention to the different users of the terms and the circumstances of the usage. We have already seen some of the ambiguities of the terms in antiquity—the fluctuation between religious and territorial usage (the Idumeans were Ioudaioi because adherents of the Jerusalem temple, but not Ioudaioi because not natives of Judea), the fluctuation between references to temple adherence and reference to general religious pattern (the adherents of the Onias temple were Ioudaioi by general pattern, in spite of their rejection of Jerusalem), the uncertainty as to which variations of the pattern, and how many of its [304] elements, are referred to. (Were the Samaritans Ioudaioi, or the Christians, or the sebomenoi? And so on.) An even more serious difficulty results from the modern specialization of “Jew” to refer to the adherents of rabbinic Judaism and their descendants, plus a few minor groups—the Karaites, the Falashas, and the like. Because of this modern usage, students of first century “Judaism” commonly take for granted that, even though rabbinic Judaism had not yet developed, something very like it was the common form of the religion, at least in Palestine, and all other groups are to be seen as divergent from this primitive stock. An extreme absurdity is reached from this notion when the Judaism of the high-priestly families of the Jerusalem temple itself, who are supposed to have been mostly Sadducean, is represented as a divergence from a pretendedly “normative” Pharisaic Judaism.

pp. 130-131

The first of these facts—which we should never have expected even from the Greco-Roman literary remains—is the wide extent of iconic decoration from the second century on. Of course, there were some references to iconic decoration in the literature: even Herod Agrippa I, the friend of the Pharisees, had in his palace at Caesarea statues of his daughters. But hitherto such details could be treated as exceptional. Now that the material has been collected it appears that decoration with human figures was customary even in Jewish religious buildings. The second and third century catacombs of Rome show Victory crowning a youth, Fortuna pouring a libation, cupids, adolescent erotes, and so on. A similar catacomb is reported near Carthage. The second or third century synagogue of Capernaum had over its main door an eagle, carved in high relief. Over the eagle was a frieze of six naked erotes, carrying garlands. Inside was not only a frieze containing human, animal and mythological figures, but also a pair of free-standing statues of lions, probably in front of the Torah shrine. The synagogue of Chorazin, of about the same date, had similar statues and a frieze showing vintage scenes of the sort traditionally associated with the cult of Dionysus. Remains of some dozen other synagogues scattered about Palestine show traces of similar carved decoration. There are human figures in high relief in the second-to-fourth century catacombs of Beth Shearim. From the same period the synagogue of Dura shows a full interior decoration of frescoes representing Biblical scenes. From the fourth and fifth century synagogues of Palestine we have a half dozen mosaic floors, and there is reason to believe that in about half of them the central panel was occupied by a picture which, if not found in a synagogue, would be recognized as a representation of the sun god driving his chariot.

So long as these remains were studied one group at a time, they might be explained as heretical. This is now impossible. On the other hand, it is dangerous to explain them as orthodox: first, because the meaning of orthodoxy is uncertain for this [491] period; second, because the carved decoration of the Galilean synagogues shows deliberate mutilation: human and animal figures have been chipped away carefully, so as to leave the rest of the carving undamaged. Similarly, the eyes of some figures in the Dura synagogue have been gouged out, but the rest of the faces left unmarked. Again, a sarcophagus in Beth Shearim was broken up in ancient times, probably because it showed Leda and the swan and other carved figures. Unfortunately, the date of the mutilations in the carved synagogues is a matter of dispute. Those who maintain that carved decoration was always permitted by orthodox Judaism can blame the destruction on the Moslems. But if these synagogues housed orthodox Judaism, then it must have been somewhat different from what is pictured in the rabbinic literature.

pp. 184-186

Religious symbols are among the objects that produce emotional reactions in their observers (make them feel secure, hopeful, etc.). The [54] emotional reaction produced by a symbol is its “value,” as distinct from its “interpretation,” which is what the people who use it say it means. The value of a symbol is always essentially the same; the interpretations often change. (Thus the picture of a wine-cup produced from time immemorial its “value,” a feeling of euphoria, although its “interpretation” as a reference to Christ’s salvific blood began only with Christianity.) So long as an object commonly produces its “value” in the observers, it is a “live” symbol. Once the “value” is no longer commonly produced, the object is a “dead” symbol. One social group may take over symbols from another. When “live” symbols are taken over, they retain their former values, but are commonly given new interpretations. In the Greco-Roman world there was a “lingua franca” of “live” symbols, drawn mostly from the cult of Dionysus, which both expressed and gratified the worshipers’ hope for salvation by participation in the life of a deity which gave itself to sacrificial death in order to be eaten by its followers and to live in them. The Jews took over certain of these “live” symbols. (In Palestine, before 70, because of the anti-iconic influence of the Pharisees, they took only geometric objects, vines, grapes, and the like; elsewhere—and, in Palestine, after 70, when Pharisaic influence declined—they took also figures of animals and human beings.) Since these symbols were “live” in Greco-Roman society, the Jews must have known their “values” and adopted the symbols for the sake of those “values.” Therefore the Jews must have hoped for mystical salvation by participation in the life of a self-giving deity, probably through a communal meal. However, since they worshipped only Yahweh, they must have imposed, on these pagan symbols, Jewish interpretations.
These interpretations, as well as the symbols’ unchanged “values,” may be discovered in the works of Philo (the chief remains of this mystic Judaism) and also occasionally in the other Jewish literature of the time, and in early Christian works. (The rapid development of Christian theology and art suggests they arose from similar prior developments in Judaism.) The same sources indicate the “values” and interpretations which the Jews found in and imposed on those objects of Jewish cult which they now began to use as symbols: the menorah, the Torah shrine, and so on. For “values,” however, all literary sources are secondary; the symbols must first be allowed to speak for themselves. Rabbinic literature is particularly unreliable as to both the “values” and the interpretations of the symbols, since “the rabbis” were both anti-iconic and opposed to mysticism. Their religion was a search for security by obedience to a law laid down by a god essentially different from man; with this god no union was possible, and his law forbade making images. The widespread use of mystical symbols testifies, therefore, to a widespread mystical Judaism indifferent, at best, to the authorities cited in rabbinic literature. To judge from the archaeological evidence, rabbinic Judaism must have been, by comparison with the mystical type, a minor sect. [55]

pp. 228-229

It is immediately obvious that we have a striking parallel to the Johannine story of Jesus’ miracle at Cana in Galilee, John 2:1-11, when Jesus comes to a marriage feast where the wine has run out and changes water into wine.

Scholars had long maintained that the Cana story had a Dionysiac background. It has no Synoptic parallel; the miracle it reports is not even of the same type as any of the miracles reported in the Synoptics; in this respect it is unique among the Johannine stories of Jesus’ “signs”; therefore the supposition that it came from some alien tradition was plausible. Moreover, its introduction could be [818] explained by a polemic motive found elsewhere in John—the desire to contrast Jesus as the true, spiritual vine, with Dionysus, the false, earthly one. However, the Dionysiac myths cited as parallels were somewhat remote in time, geography, and content. Consequently a plausible case could be made out by those who wanted to deny the Dionysiac reference and to explain the story wholly from Jewish analogies and concerns—the multiplication of food and “healings” of springs in the Old Testament, the contrast, in the Johannine story, between the Jewish water of purification, which is transformed, and the wine of Christian baptism with the spirit, which Jesus gives to men. Now, however, the main reasons for denying a Dionysiac reference are refuted. It appears that when the Gospel according to John was being written there was next door to Galilee, at Sidon (a city whose territory Jesus is said to have visited, Mark 7:31, and from which men came to Galilee to hear him, Mark 3:8) an established public festival of which a myth obviously similar to the Cana story was the central theme. [819]

pp. 233-235

Its presumably Jewish source therefore serves to strengthen Bickerman’s theory that it was the Jews of Italy who represented Yahweh as Sabazius in 139 B.C. A century later we find the ivy wreath and grapes on the coins of Antigonus Mattathias (40-37); then, over the entrance of Herod’s temple, the great golden vine which seemed to some a proof that the resident deity was Dionysus. After this comes the coinage of the Jewish revolts in 66-73 and 132-35, with its chalices and amphorae, vine leaves and grape clusters. (The lyre on the coins of the second revolt recalls the one in the hands of the satyr in the Beit Lei burial cave.) [Pausanias had heard of some shrine in Palestine which was said to be the tomb of a Silenus (6.24.8).] The [824] evidence of the coins is continued by the decoration of Palestinian tombs and synagogues, much of it using familiar Dionysiac vocabulary.

The popularity of the cult of Dionysus in Palestine, the sense which pagans found in these symbols (as shown by the interpretations already cited) and the frequency with which these symbols were used on religious buildings and funerary objects (where ancient decoration was commonly significant)—these factors taken together make it incredible that these symbols were meaningless to Jews who used them. The history of their use shows a persistent association with Yahweh of attributes of the wine god. This association explains why some Jews identified the two deities, others at least chose Dionysus as the interpretatio graeca of Yahweh, and yet others contrasted Yahweh—or his logos—as the true vine, with Dionysus the false one.

If we now look for the source of this association we are led back to very early Biblical material. For it cannot be supposed that the wine god came to Palestine only with the Greeks—not even though the Greeks came there in the second millennium B.C., as their pottery indicates. Palestine was presumably a grape growing country from time immemorial, and the god was doubtless as old as the use of his gifts. If the Biblical story (Exod 6:3) is to be believed, Yahweh was a new-comer in the country. His association with the wine god—and with other deities—will have begun with the conquest or shortly after. By the time of our earliest Biblical texts he had become a rather complex concept: he had been completely syncretized with El, Elyon, and Shaddai; with other deities, like Tsur, Gad, and Am, his relations were certainly close. As for the wine god(s)—such phenomena as the Rekhabite reaction and, even more, the survival of the nazirate as the status perfectionis of a man devoted to Yahweh, indicate that Yahweh’s original [825] attitude to wine was not a friendly one. Down to the time of Hosea, most Israelites believed that wine was given by Baal, not Yahweh. On the other hand, the author of Judges 21:12ff. thought that in the time of the Judges the girls of Shiloh already went out to dance in the vineyards on the feast of Yahweh—as they still did in the time of the Mishna (Ta‘anit 4:8); the legend of Jephthah’s daughter looks like a nationalistic explanation of a maenadic custom; wine, though secondary in sacrifices to Yahweh, became an accepted element in them after the tribes settled down; and there is no doubt that one of the original elements of Sukkoth was a vintage festival. Plutarch’s source and his own judgment were right about this—the feast was certainly sacred to a wine god. But who was this wine god, and where did he first become associated with Yahweh?

It is possible that a partial answer to these questions may be found in Genesis 18:1-15, the myth of Abraham’s entertaining Yahweh and two angels and receiving as a reward the promise of a son from Sarah. Ever since the time of Delitzsch this myth has been recognized as an example of a common type, known from many Greek and Roman parallels. Numerous scholars have [826] remarked that the myth presumably was that of the sanctuary of Hebron—the center of a grape growing district, as Skinner observes—and many have thought it pre-Yahwist, since it tells of the appearance of three deities, not one. The Yahwist retelling, with its change from the plural to the singular, has created difficulties in the text. It is not improbable that the three deities and their host, the original hero of the shrine (who may or may not have been named Abraham) made up the “four” referred to in the ancient name—or nickname—of Hebron, Kiryat ha ‘arba’, “the city of the four,” commonly syncopated to Kiryat ‘arba’. And it is not unlikely that the three deities may be the three giants whom the Yahwist hero Caleb drove out of Hebron when he took it (Josh 15:14; Judges 1:20; cp. Josh 11:21f.; 14:15).

The names of the three figures at Hebron differ from one Biblical source to another, for reasons we cannot now discern. In one set of stories they are Sheshai, Ahiman, and Talmai, Aramaic names which, as Moore remarked, tell us nothing. In Gen 14:13, however, they are Mamre, Eshkol, and Aner, all of whom made a covenant with Abraham (as did the deity who appeared to him in Hebron, if Gen 15:18 or 17:1-27 is to be associated with that shrine). Of these, Mamre and Eshkol had their individual holy places, Mamre at his oak grove and sacred well, modern Ramet el Halil, just outside Hebron, Eshkol in his valley famous for its grape vines—his name means “grape cluster.”

p. 237

Sozomen then goes on to tell how St. Helena, the mother of Constantine, got wind of these goings on and persuaded Constantine to build a church on the site and prohibit all non-Christian worship. In this report his narrative is based on Eusebius’ Life of Constantine. From Constantine’s letter, as quoted by Eusebius (3.51f.), it appears that there were pagan idols and a pagan altar on the site. These the emperor ordered destroyed. From their destruction—presumably—came the fragments discovered by Mader in his excavation of the site. Of these fragments the one certainly recognizable was a head of Dionysus. Or should we say, Dionysus-Yahweh-Eshkol?

Orphism and Christianity in Late Antiquity
by Miguel Herrero de Jáuregui
pp. 113-116

Groups of pagans with religious inclinations close to Judaism sometimes associated themselves with local Jewish communities, and this kind of Jewish acolyte — from the ranks of which came many of the earliest converts to Christianity — is likely to have acted as a connection to other Greek cults. A recent study by Stephen Mitchell, for instance, clearly establishes that the pagan cult of Theos Hypsistos — a single and transcendent deity, whose name could not be spoken — was the result of strong Jewish influence on pagan culture in Asia Minor. This is the most spectacular example of Hellenic adoption and appropriation of Jewish beliefs. However, the presence of Biblical ritual and theological elements, particularly the use of the name “Iao” — evidently connected with “Yahweh” — in religious and magical papyri of Imperial Egypt, makes clear that this Judaizing influence spread through many other religious and literary circles in which Orphic elements also appear. The existence of Jewish elements is furthermore well established in Hermetic and Gnostic literature, where the influence of Orphism has already been noted. The existence of an Orphic-Jewish or Jewish-Orphic syncretism is a practically inevitable conclusion given such trends.

Ideological assimilation is of course a bilateral process, though ancient apologetic writings tend to transmit only cases of Jewish influence spreading outward to affect the Hellenic culture around it. This bias creates the impression of a Judaism kept free of syncretistic contamination and exerting its energies in a purely unidirectional fashion. It is true that Jewish orthodoxy grows more and more passionate over time in its exclusion of Greek influences: for example, the translation into Greek of the Bible becomes more literal from the Septuagint until the absolutely literal translation of Aquila. However, the strict boundaries set by orthodoxy and apologetics are artificial, and it is not unlikely that users of many of the Orphic-Jewish syncretistic texts were in fact Jewish themselves, or very close to Jewish communities. The motivation behind the reaction of those Jewish purists who resist Hellenic thought and culture, and the work of those internal apologists who seek to reaffirm the strength of orthodox doctrine to the faithful, appears to lie precisely in fear of Jewish religious identity being lost against this syncretistic background. The entirely reactive character of the movement, however, attests to the great fluidity of actual religious practice. It is the same situation the Christian apologists will later confront when they mount their polemical attack against a nascent pagan-Christian syncretism — whose strength is revealed not only in the direct references made by the apologists themselves, but also in the vitriol with which they decry it.

It is in Palestine that the dramatic ambiguities in religious cult and belief of the time are revealed in greatest detail, and in particular in the cult of Dionysus. The cult in question is connected specifically with Bacchus’ role as god of wine, and while we do not have sufficient evidence to link the cult — universal throughout the Mediterranean — with the mysteries or with the Orphic tradition, the Jewish attitude toward it is precisely the same as is found in relation to cults of a more explicitly Orphic flavour. The presence of the Bacchic cult in Palestine and the assimilation of Bacchus to Yahweh are very well attested, and it is presumably because of the pervasiveness of such syncretism that contemporary denunciations of Bacchic rituals are so harsh. When Antiochus Epiphanes established the worship of Dionysus in his attempts to Hellenize Judea in 167 B.C. (2 Mac 5-6), he was doubtless trying to institutionalize an already-existing syncretism for political self-interest. His attempt was nevertheless condemned by the ultimately prevailing orthodoxy of the time as “the abomination of desolation” (Dn. 12:11). The Wisdom of Solomon, the last canonical book of the Old Testament, was written in Alexandria in the first century BC and harshly criticizes the Bacchic teletai (Wis 12.3-7). Dionysus becomes in this account the heir to the Baal of the Canaanites, an object of worship whose relationship to Yahweh was marked, on the evidence of the Old Testament synthesized in the classic work of Rainer Albertz (1994), by a similarly widespread syncretism existing in the face of constant official condemnation. Pagan authors such as Plutarch, Valerius Maximus, and Tacitus differ on whether the Jewish Yahweh should be identified with the Greek Dionysus.
The Orphic tradition does not appear to play any direct role in this process of assimilation and separation, but the tendency to Bacchic assimilations should be borne in mind when attempting to explain the absence of Dionysiac elements from those aspects of Hellenic culture ultimately accepted into Jewish orthodoxy. From its perspective, the most similar is the most dangerous, for it tends most powerfully towards uncontrolled assimilation.

One may attempt to differentiate from this kind of clear-cut syncretism some instances of cultural assimilation which attest a much more managed process, guided by an orthodoxy that sought to hellenize Judaism while preserving its inner core. External accommodation to the Hellenistic cultural milieu may be in principle distinguished from actual cultic syncretism. For example, when Philo depicts the community of Jewish therapeutai as if they were initiates in the Bacchic mysteries, such assimilation is purely an external metaphor and does not mean that these therapeutai adored Bacchus in any way. However, such a clear distinction is not always conceptually fixed. The figure of Orpheus offers an excellent example. There is one possible textual instance of an external assimilation between David and Orpheus. Psalm 151 in the Septuagint version talks about David — the putative author of the Psalm — praising God with his lyre. The Psalm was only known in its Greek translation until 1965, when the Hebrew version discovered in the manuscripts of Qumran was published. This purported original version has two lines (2b-3) which are absent from the Greek version: “And [so] have I rendered glory to the Lord, thought I, within my soul / The mountains do not witness to him, nor do the hills proclaim; the trees have cherished my words and the flock my deeds.” These lines, which state the power of the singer over nature, suggested from the beginning that there was a conscious appropriation of the myth of Orpheus, the singer who enchants nature, to depict David’s song. The reading and translation of these lines is, however, very controversial, and this interpretation has been hotly debated. Still, the fact that the Greek version of the Psalm lacked precisely those two verses may be a sign of some censorship of unclear lines whose assimilation of a Greek myth may have been excessive for the orthodox Jewish translators.

Whatever the facts of the case of Psalm 151, this presentation of David as Orpheus is clearly found in several images found in synagogues of the eastern Empire — the most famous being the frescoes of Dura-Europos in the third century AD and a mosaic in Gaza of the sixth century AD. King David is depicted as Orpheus, surrounded by the animals he has attracted to him with his voice. It is clear that the iconography of the singer whose music pacifies those who hear him is perfectly adapted to the representation of David, whose music cured the mad soul of King Saul (1 Sm. 16:23). The Orpheus myth furthermore occasions the depiction together of various animals who, in listening to the musician’s song, forget their natural enmity and live together in harmony — an image characteristic of the Golden Age that the prophets announced would witness the restoration of David’s kingdom (Is. 11:1-9). This kind of appropriation of Greek myths to Jewish contexts can be hard to distinguish from free-flowing mutual syncretistic exchange. In fact, the following discussion of Christian encounters with Orphic tradition expands on the problems raised by the Jewish evidence.

Yahweh and the Gods and Goddesses of Canaan
by John Day
pp. 66-67

So far as the above biblical names are concerned, we cannot be certain whether they simply allude to the Canaanite god Baal, or refer to Yahweh as being equated with Baal, or are simply an epithet “Lord” for Yahweh without actual identification with the god Baal. Whatever the case with the above names (and the same explanation need not apply to Jerubbaal and the others), we have definite evidence that Yahweh could be referred to as Baal from the personal names Bealiah (1 Chron. 12.6 [ET 5]), one of David’s warriors, and Yehobaal, a name found on a seal, which seem to mean respectively “Baal is Yahweh” and “Yahweh is Baal”. That Yahweh could actually be equated with Baal is clearly indicated by Hosea 2.

In v. 18 (ET 16) Hosea declares, ‘And in that day, says the Lord, you will call me “My husband”, and no longer will you call me “My Baal”’. The following verse goes on to say, ‘For I will remove the names of the Baals from your mouth, and they shall be mentioned by name no more’. Now the Baals were mentioned earlier in this chapter in v. 15 (ET 13), and these clearly refer to the fertility deity, Baal, whom the people regarded as being responsible for the grain, wine, oil and so on in v. 10 (ET 8), and also the ‘lovers’ of v. 7 (ET 5). From all this it can hardly be doubted that Hosea was not simply objecting to the epithet ‘Lord’ (ba‘al) being applied to Yahweh, but was countering a tendency of the people to conflate Yahweh and Baal to such an extent that the essential identity and uniqueness of the former was compromised.

Further evidence in support of the view that there were some who equated Yahweh with Baal derives from the fact that such a hypothesis has explanatory power in accounting for the rise of the Son of Man imagery in Daniel 7.

p. 70

Since the Baal promoted by Jezebel was the same Baal who had been worshipped by the Canaanite population of Israel and syncretistic Israelites, it can readily be understood how he gained such a large following. This would not be the case with Melqart, the city god of Tyre, and, as M.J. Mulder has emphasized, Ahab would have committed political suicide had he attempted to promote such a foreign god.

p. 74

That the name Baal-Zebul was known to the Jews is attested in the New Testament, where Beelzebul has become the name of the Prince of the Demons, Satan (Mt. 10.25, 12.24, 27; Mk 3.22; Lk. 11.15, 18–19). The reading Beelzebul in the New Testament is certainly original: almost all the Greek manuscripts read Beelzeboul. Only Vaticanus (B) and, in every case except one, Sinaiticus (א) read Beezeboul. The reading Beelzebub is found later in the Vulgate and Peshitta, and is clearly inferior, making the New Testament demonic name agree with the god of Ekron in 2 Kings 1. It is all the more remarkable that the form Beelzebul is attested in the New Testament when we reflect that it is not found in the Old Testament, and it testifies to the continuation of a Canaanite numen in transformed demonized form in popular Jewish religion at a late date.

It is not surprising that the name became a term for the ‘Prince of the Demons’ (cf. zbl, ‘prince’): the name of the leading god, when abominated, naturally became transformed into that of the leading demon. The idea that pagan gods are demons is found in Deut. 32.17; Ps. 105.37; Bar. 4.7 and Ps. 95.5 (LXX); also in 1 Cor. 10.20 and Rev. 9.20.

The Early History of God
by Mark S. Smith
Kindle Locations 1553-1682

According to Philo of Byblos (PE 1.10.7), beelsamen was a storm-god, associated with the sun in the heavens and equated with Zeus,305 although Baal Shamem’s solar characteristic apparently was a later product.306 That Baal Shamem and not Melqart was the patron god of Ahab and Jezebel may be inferred from the proper names attested for the Tyrian royal family. The onomasticon of the Tyrian royal house bears no names with Melqart. There is only one exception to *b‘l as the theophoric element in royal proper names from Tyre.307

That Baal Shamem and not Melqart was a threat in Israel in the pre-exilic period might be inferred from the fact that the god in question is called “the baal” (1 Kings 18:19, 22, 25, 26, 40). The invocation of Baal Shamem in the Aramaic version of Psalm 20 written in Demotic may also provide evidence of this god in Israelite religion.308 This version of Psalm 20 belongs to a papyrus dating to the second century known as Papyrus Amherst Egyptian no. 63 (column XI, lines 11-19). The text, which may have come from Edfu, shows some Egyptian influence, specifically the mention of the god Horus. The text may secondarily reflect genuine Israelite features. M. Weinfeld argues that the psalm was originally Canaanite or northern Israelite.309 For Weinfeld, the references to Baal Shamem, El-Bethel, and Mount Saphon reflect an original Canaanite or northern Israelite setting, perhaps Bethel. The biblical version of Psalm 20 would reflect a southern version, which secondarily imported the psalm into the cult of Yahweh. In this case, the Aramaic version may have derived from a northern Israelite predecessor. If so, the reference to Baal Shamem might reflect the impact of this god in Israelite religion.

Some scholars identify the baal of Jezebel with the baal of Carmel, perhaps as his local manifestation at Carmel.310 Like Baal Shamem, the baal of Carmel appears to be a storm-god. A second-century inscription from Carmel on a statue identifies the god of Carmel as Zeus Heliopolis.311 At Baalbek, Zeus Heliopolis had both solar and storm characteristics. According to Macrobius (Saturnalia 1.23.19), this Zeus Heliopolis was a solarized form of the Assyrian storm-god, Adad.312 As with Baal Shamem, the solar characteristic of Adad is a secondary development. Macrobius (Saturnalia 1.23.10) identifies the cult of Zeus Heliopolis with a solarized worship of Jupiter. […]

In sum, the biblical evidence suggests that the Phoenician baal of Ahab and Jezebel was a storm-god. The extrabiblical evidence indicates that the baal of Carmel and Baal Shamem were also storm-gods, whereas Melqart does not appear to have been a storm-god. From the available data, following O. Eissfeldt, Baal Shamem was the baal of Jezebel.

Some reason for the adoption of the Phoenician baal by the northern monarchy may be tentatively suggested. The coexistence of cult to Yahweh and Baal prior to and up to the ninth century may have suggested to Ahab and his successors that elevating Baal in Israel would not represent a radical innovation. Ahab’s religious policies presumably would have appealed to those “Canaanites” living in Israelite cities during the monarchy, if these “Canaanites” represent a historical witness to those descendants of the old Canaanite cities that the Israelites are said not to have held originally (Josh. 16:10; 17:12-13; Judg. 1:27-35);314 however, this witness is difficult to assess for historical value. The religious program of Ahab and Jezebel represented a theopolitical vision in continuity with the traditional compatibility of Yahweh and Baal. Up to this time both Yahweh and Baal had cults in the northern kingdom. Whereas Yahweh was the main god of the northern kingdom and divine patron of the royal dynasty in the north, Baal also enjoyed cultic devotion. Ahab and Jezebel perhaps created a different theopolitical vision. While the cult of Yahweh continued in the northern kingdom, Baal perhaps was elevated as the patron god of the northern monarchy, thus creating some sort of theopolitical unity between the kingdom of the north and the city of Tyre. […]

According to historical sources, support for Baal was severely ruptured at this juncture in Israelite history. Jehu managed the slaughter of Baal’s royal and prophetic supporters and the destruction of the Baal temple in Samaria (2 Kings 10), and Jehoiada the priest oversaw the death of Athaliah and the destruction of another temple of Baal (2 Kings 11). Jehu’s reform was not as systematic as the texts might suggest, however. Jehu did not fully eradicate Baal worship.316 Confirmation for this viewpoint comes from inscriptional and biblical sources. The Kuntillet ‘Ajrûd inscriptions contain the names of Baal and Yahweh in the same group of texts. Dismissing such attestations to the god Baal because the script may be “Phoenician” appears injudicious.317 Indeed, the texts bear “vowel letters” (or matres lectionis),318 which constitute a writing convention found in Hebrew, but not in Phoenician. Unlike Hebrew, Phoenician does not use letters to mark vowels.319

References in Hosea to “the baal” (2:10 [E 8]; 2:18 [E 16]; 13:1; cf. 7:16) and “the baals” (2:15 [E 13]; 2:19 [E 17]; 11:2) add further evidence of Baal worship in the northern kingdom. Hosea 2:16 (E 18) begins a section that recalls imagery especially reminiscent of Baal. According to some scholars,320 Hosea 2:18 (E 16) plays on ba‘al as a title of Yahweh and indicates that some northern Israelites did not distinguish between Yahweh and Baal. The verse declares, “And in that day, says Yahweh, you will call me, ‘My husband,’ and no longer will you call me, ‘My ba‘al.’”321 The substitution of Yahweh for Baal continues dramatically in Hosea 2:23-24 (E 21-23). These verses echo Baal’s message to Anat in KTU 1.3 III 13-31 (cf. 1.3 IV 7-20). In this speech, Baal announces to Anat that the word that he understands will be revealed to humanity who does not yet know it. In the context of the narrative, this word is the message of the cosmic fertility that will occur when Baal’s palace is built on his home on Mount Sapan. Upon the completion of his palace, Baal creates his meteorological manifestation of the storm from the palace, which issues in cosmic blessing (KTU 1.4 V-VII). Part of the message to Anat describes the cosmic communication between the Heavens and the Deeps, an image for cosmic fertility […]

Despite royal attempts at reform, Baal worship continued. Although Jehoram, the son of Ahab, undertook a program of reform (2 Kings 3:2) and Athaliah and Mattan, the priest of Baal, were murdered (2 Kings 11:18), royal devotion to Baal persisted. Ahaz fostered Baal worship (2 Chron. 28:2). According to Jeremiah 23:13, Baal worship led to the fall of Samaria and the northern kingdom. The verse declares, “And among the prophets of Samaria I saw an unsavory thing; they prophesied by Baal and led astray my people, Israel.” Jeremiah 23:27 further condemns Israelite prophecy by Baal. Hezekiah sought to eliminate worship of Baal, but his son, Manasseh, rendered royal support to his cult (2 Kings 21:3; 2 Chron. 33:3). Finally, Josiah purged the Jerusalem temple of cultic paraphernalia designed for Baal (2 Kings 23:4; cf. Zeph. 1:4). Prophetic polemic from the end of the southern kingdom also claims that the monarchy permitted religious devotion to Baal down to its final days (Jer. 2:8; 7:9; 9:13; 12:16). From the cumulative evidence it appears that on the whole Baal was an accepted Israelite god, that criticism of his cult began in the ninth or eighth century, and that despite prophetic and Deuteronomistic criticism, this god remained popular through the end of the southern kingdom. There is no evidence that prior to the ninth century Baal was considered a major threat to the cult of Yahweh. […]

The descriptions of Baal and baals in 1 Kings 17-19, Hosea 2, and other biblical texts raise a final issue concerning Baal’s character in ancient Israel. In the Ugaritic sources Baal’s meteorological manifestations are expressions of his martial power. In contrast, 1 Kings 17-19 and Hosea 2 deplore belief in Baal’s ability to produce rains, but these and other biblical passages are silent on the martial import of his manifestation. Indeed, no biblical text expresses ideas about Baal’s status as a warrior. Yahweh had perhaps exhibited and possibly usurped this role at such an early point for the tradents of Israel’s religious literature. This conclusion might be inferred from the numerous similarities between Baal and Yahweh that many scholars have long observed.

The Chosen People
by John Allegro
pp. 19-20

In fact, the names of the patriarchal heroes, as that of the god himself, are non-Semitic, as our recent researches have shown, and go back to the earliest known civilization of the Near East, indeed of the world. The language to which we can now trace these names is called Sumerian, and seems to have been the fount of both Semitic and Indo-European and was in use long before these two linguistic families went their separate ways. The divine name Yahweh (its purposeful mispronunciation as “Jehovah” was intended to preserve its secrecy from the uninitiated) turns out to have been merely a dialectal form of the Greek god-name Zeus, and both meant “spermatozoa,” the source of all life. The very common Semitic word for “god,” El, in its various forms, has a similar connotation, and cognate names such as Ba’al and Hadad, Seba’oth (the Hindu Siva), and the like, refer to the male organ of generation with which the god was mythologically identified. He was envisaged as a mighty penis in the heavens which, in the thunderous climax of the storm, ejaculated semen upon the furrows of mother Earth, the womb of creation.

Israel on the steppes of north Arabia before she became tainted with the gross perversions of agricultural fertility cults. The whole concept of the desert god owed more to the efforts of later theologians to historicize their mythology than to any authentic tribal memories of Israel’s early experiences. We are now able to pinpoint the source of the patriarchal myths in a particular form of the fertility religion, centered on the cult of the sacred mushroom. Its devotees conceived the fungus as a manifestation of the phallic god, and believed that eating it brought them into a direct relationship with the deity and enabled them to share in the heavenly secrets. The cap of the mushroom, the Amanita muscaria (see below, Chapter 16), contains a hallucinatory drug which imparts a sense of euphoria, coupled with violent physical energy, followed by periods of acute depression. This cult was as old as the name Yahweh, and similarly derived from the Sumerians.

p. 51

However, much more than fiscal measures was required to weld this heterogeneous collection of peoples together into an organic whole. There had to be an emotional rallying point, overriding all other allegiances, ethnic, even familial. It could only be religiously inspired, and at its center must be a supranational god, a single deity to whose creative acts was owing all life, on earth or in the heavens, and to whom was thus due the homage of all men. The traumatic effects of the Exile upon the minds of the intellectual elite of Judah had already produced the monotheistic ideal of Second Isaiah (Ch. 40-55), who uncompromisingly identified the tribal god Yahweh with the sovereign Lord of all history. In fact, as we may now appreciate, he was doing no more than perceive in the deity the fertility concept that had been implicit in his name, “Sperm of life,” from the beginning. Yahweh, like his exact philological counterpart, the Greek Zeus, and his semantic equivalent, El, chief god of the Semites, was always the one Creator God, the source of all life, and only secondarily appropriated as a tribal deity. The Jewish philosopher, wrenched from his homeland by a foreign conqueror, was forced to project his understanding of Yahweh’s dealings with his people against a backcloth of world politics. If Israel’s god had any reality at all, he must be able to act over a far wider area than Palestine, and to be able to demand the allegiance of many more peoples than the Jews and the Canaanites. The prophet saw Yahweh as a cosmic deity, lord of the heavenly hosts and forces of nature, but at the same time still the special god of Israel, a tribal deity whose main interest was the welfare of his Chosen People. Thus it followed that whatever the grand strategy in the Creator’s mind, it involved the destiny of the Jews, and all history was directed to their glorification.

p. 75

For twenty-five years this “Acra,” as it was called, peopled not only by Seleucid troops but other pagans and Hellenized Jews, became virtually the new Greek Jerusalem, a polis, with the Temple serving as the city shrine like that of any other Greek center. The “godless” people who inhabited the Acra, led apparently by Menelaus and his friends, were clearly intent on a complete integration with their Greek neighbors. The exclusive nature of the Yahwistic cult was to be broken and the tribal god identified with the Greek Zeus.

As we saw, this association between the gods was, in fact, perfectly historical and legitimate. Zeus was, indeed, Yahweh in origin; both names meant the same, “seed of life,” or spermatozoa, and both had their common origin in the underlying fertility religion of the ancient Near East. Just how far this fact may still have been recognized in popular tradition, even as late as the second century BC, we cannot know. Our records were composed by writers utterly hostile to this synthesis.

pp. 78-79

The climax in the Greek campaign to amalgamate the Jewish and pagan gods came, according to our sources, with the introduction of the so-called “Abomination of Desolation” into the Temple, in December 167 BC (Dan 9:27; 11:31; 12:11; I Macc 1:54). The strange Semitic phrase is customarily explained as a pun on the cultic title “Ba’al of Heaven,” designating the ancient Semitic storm god Hadad, with whom Zeus (Jupiter) Olympius had been already identified (cp. II Macc 6:2). Much the same syncretism had already been carried out with local approval in the Samaritan Yahwistic Temple on Mount Gerizim, where the god was identified with Zeus Xenius, “Defender of Strangers” (II Macc 6:2). Again, historically, this identification of the storm deity with Yahweh and Zeus was perfectly correct. The Jewish god’s accompanying title, Sebaoth, we may now recognize as having originally meant “Penis of the Storm,” reflecting the ancient conception of the phallic deity as a mighty organ in the heavens ejaculating the precious Yahweh/Zeus spermatozoa in his tempestuous orgasm. The idea is accurately conveyed in the Semitic divine name Hadad, derived from a Sumerian term for “Mighty Father.”

Whatever the form (probably phallic) in which Zeus Olympius was represented “upon the altar” (I Macc 1:54), it was certainly placed there with the active support of Menelaus and the priestly hierarchy of the Temple, and was doubtless as popular among the laity as were the incense altars “at the doors of their houses, and in the streets” (I Macc 1:55). Furthermore, it is difficult to believe that these cultic “abominations” were instituted overnight; Hellenism had long before made deep inroads into Jewish ideas and practices, and, in any case, many aspects of Greek religion would have found their echoes in the old Israelite fertility worship, which never lay far beneath the surface of the Jewish consciousness. Thus, the worship of Bacchus, in which the Jews joined, carrying the ivy-covered thyrsus (II Macc 6:7), was again only another aspect of the ancient fertility cult, on which our recent studies of the religion of the Sacred Mushroom have cast much new light.

Suns of God
by Acharya S (D. M. Murdock)
pp. 109-110

The Hercules myth also makes it into the Old Testament, in the tale of Samson, whose name means “sun” and is the same as Shams-on, Shamash, Shamas, Samas and Saman. Both solar heroes are depicted with their gates or pillars, those of Hercules at Gades, while Samson’s were at Gaza. Each is associated with lion killing, and each was taken prisoner but breaks free as he is about to be sacrificed, killing his enslavers in the process. In the end, the Hercules/Samson myths are astrotheological, with the pillars representing solar symbols: “The two pillars…are simply ancient symbol-limits of the course of the sun in the heavens…” “Now just as Samson in one story carries the pillars, so did Herakles…. And in ancient art he was actually represented carrying the two pillars in such a way under his arms that they formed exactly a cross…” Like many others, “St. Augustine believed that Samson and the sun god Herakles were one.”

In addition, the Palestinian term “Simon,” “Semo” or “Sem” is likewise a name for the sun god Shamash/Shemesh, who, like Hercules, has been equated with the Canaanite/Phoenician god Baal. This god “Semo or Semon was especially worshipped in Samaria,” also known as the “Cyrenian Saman,” who is evidently the character traditionally represented among early Christians and Gnostics as “Simon of Cyrene” who legendarily bore Christ’s cross. Interestingly, the Cyrenians were some of the earliest proselytizers of Christianity (Acts 11:20). In Hebrew “Sem” or “Shem” means “name,” which is the term pious Jews use to address Yahweh, the latter being one of the ineffable, unspoken names of God. As “Sem” or “Shem” is a name for the God Sun, so is Yahweh; it is apparent that “Sem” is the northern kingdom version of Yahweh, whence come “Semites” and “Samaritans.” Indeed, the “early Israelites were mostly sun worshippers,” as the fables concerning “Moses, Joshua, Jonah, and other biblical characters are solar myths.”

pp. 116-117

When comparative mythology is studied, the precedent for Christianity becomes evident in numerous “Pagan” cultures. So too is the astrotheological religion present in Judaism, the other predecessor of Christianity. Although Judaism is today primarily a lunar creed, based on a lunar calendar, as a result of the nomadic nature of its early tribal proponents, the religion of the ancient Hebrews and Israelites was polytheistic, incorporating the solar mythos as well. The desert-nomad tribes that Judaism came to comprise were essentially moon-worshipping or night-sky people, but they eventually took on the solar religion as they came to be more settled. This astrotheological development is reflected in the use of different calendars: for example, the Dead Sea scrolls contained a solar calendar, as opposed to the luni-solar calendar used by the rabbis. The Dead Sea collection also contained treatises on the relation of the moon to the signs of the zodiac, such as the “Brontologion” (4Q318).

The polytheism of the Israelites is reflected in a number of scriptures, including Jeremiah 11:13-14, wherein the writer laments:

For your gods have become as many as your cities, O Judah; and as many as the streets of Jerusalem are the altars you have set up to shame, altars to burn incense to Baal.

The word for “gods” in this passage is Elohim, which is regularly translated as “God” when referring to the Jewish “Lord.” The singular form of “Elohim” is “Eloah” or “Eloh,” used 57 times in the Bible, to indicate both “God” and “false god.” “Baal,” used over 80 times in the Old Testament, means “lord” and represents the sun god worshipped by the Israelites, Canaanites, Phoenicians and throughout the Levant. It is noteworthy that at this late date when “Jeremiah” was written (c. 625-565 BCE), the Jews were still polytheistic, as they had been for centuries prior.

This polytheism is further demonstrated in a confused passage at Psalms 94:7, which states: “…and they say, ‘The Lord does not see; the God of Jacob does not perceive.'” (RSV) The word translated as “The Lord” is actually “Jah,” while the “God” of Jacob is “Elohim,” or gods. A better translation would be “…Jah does not see; Jacob’s Elohim do not perceive.” This Jah is the IAO of the Egyptians, while the Elohim are the multiple Canaanite deities. According to Dr. Parkhurst and others, the Elohim of the Israelites referred to the seven planetary bodies known and revered by the ancients. These seven Elohim are also the seven powerful Cabiri of the Phoenicians and Egyptians, one of whom was “Axieros,” whom Fourmont identified as the biblical Isaac. The Elohim and polytheism of the Hebrews are dealt with extensively in The Christ Conspiracy and elsewhere. In any event, the worship of the Hebrews, Israelites, and Jews long before the Christian era was both polytheistic and astrotheological, the same as that of their so-called Pagan neighbors.

Did Moses Exist?
by D. M. Murdock (Acharya S.)
Kindle Locations 791-798

In our quest, we must keep in mind the syncretism or merging together of divine figures, such as these various lawgivers, practiced not only by pagans with their numerous gods and goddesses but also by Jews. Regarding the Greco-Roman period (332 BCE– 284 AD/ CE), for example, British New Testament scholar Dr. Ralph P. Martin (1925– 2013) and American theologian Rev. Dr. Peter H. Davids state:

Nowhere is syncretism illustrated more clearly than in the magical and astrological beliefs of the era. In this realm, power takes precedence over personality. Commitment to one deity or fidelity to one cult gives way to rituals of power that work. Thus many gods and goddesses could be invoked at the same time by one person. Yahweh (or Iao) could be invoked in the same breath as Artemis and Hekate. Palestinian and diaspora Jews participated in this form of syncretism. Numerous Jewish magical amulets, spells and astrological documents attest to the prevalence of syncretistic Jewish magic.

Kindle Locations 6114-6165

Summarizing the works by some of these ancient writers, Israeli scholar Dr. Abraham Schalit (1898– 1979) remarks:

The non-Jews of Alexandria and Rome alleged that the cult of Dionysus was widespread among Jews. Plutarch gives a Bacchanalian interpretation to the Feast of Tabernacles… According to Plutarch the subject of the connection between the Dionysian and Jewish cults was raised during a symposium held at Aidepsos in Euboea, with a certain Moiragenes linking the Jewish Sabbath with the cult of Bacchus, because “even now many people call the Bacchi ‘Sabboi’ and call out that word when they perform the orgies of Bacchus.” Tacitus too thought that Jews served the god Liber, i.e., Bacchus-Dionysus, but “whereas the festival of Liber is joyful, the Jewish festival of Liber is sordid and absurd.” According to Pliny, Beth-Shean was founded by Dionysus after he had buried his wet nurse Nysa in its soil. His intention was to enlarge the area of the grave, which he surrounded with a city wall, although there were as yet no inhabitants. Then the god chose the Scythians from among his companions, and in order to encourage them, honored them by calling the new city Scythopolis after them (Pliny, Natural History 5: 18, 74).

An inscription found at Beth-Shean dating from the time of Marcus Aurelius [121– 180 AD/ CE] mentions that Dionysus was honored there as ktistes [founder]. Stephen of Byzantium reports a legend that connects the founding of the city of Rafa also with Dionysus (for the Dionysian foundation legends of cities in the region, see Lichtenberger’s study). It is wrong to assume as some do that Plutarch took his account of the festival of Tabernacles from an antisemitic source, for despite all the woeful ignorance in his account it contains no accusation against, or abuse of, the Jews.

It is more likely that Plutarch described the festival of Tabernacles from observation, interpreting it in accordance with his own philosophical outlook, which does not prevent him, however, from introducing into it features of the cult of the famous Temple of Jerusalem gleaned by him in his wide reading. The description as a whole, however, is of Tabernacles as it was celebrated in the Greek diaspora at the end of the first and the beginning of the second century C.E., and not as it was celebrated in the Temple, which had already been destroyed for more than a generation. The festival undoubtedly absorbed influences from the environment, so that Plutarch could indeed have witnessed what he recognized as customs of the Dionysian feast. 870

In view of what we have seen and will continue to see here, we submit that Plutarch’s account is not “woefully ignorant” and that the influence of Dionysianism on Jewish religion began before the First Temple period, including among the Amoritish proto-Israelites who eventually settled the hill country.

Beth Shean

The important ancient town of Beth Shean or Beit She’an (Bethshan, Βαιθσάν, Βεθσάνη)— meaning “house of tranquility”— was called “Scythopolis” in Greek and supposedly was founded by Dionysus. Beth Shean is referred to several times in the biblical books of 1 and 2 Samuel, as well as in Judges and others, and is located strategically in the fertile Jordan Valley, south of the Sea of Galilee and east of the Samarian hill country. Situated at the juncture between the Jordan and Jezreel Valleys, this region is also deemed the “West Bank” of the Jordan River. It is noteworthy that one of the area’s largest winepresses was found at Jezreel, one of many such devices in ancient Israel. 871

The Scythopolis/ Beth Shean region began to be occupied from at least the fourth millennium BCE, with settlements in the third millennium onward, until an earthquake destroyed it in the Early Arab period (749 AD/ CE).

In the Late Bronze Age (15th– 12th cents. BCE), Beth Shean was an Egyptian administrative center, followed by a Canaanite city (12th– 11th cents. BCE) and then an Israelite settlement (10th cent.– 732 BCE). During this time, the people worshipped many different gods, including those of the Canaanites, Egyptians, Greeks and Philistines. A stele from the era of pharaoh Seti I mentions Egypt’s victory over the neighboring hill tribes, among whom were the Hapiru. 872

Grapevine cultivation in the Beth Shean area apparently began during the fourth millennium BCE, 873 and it may be suggested that the vine and wine cult existed in the region long before the Israelites arrived or emerged. As noted, Greek occupation of Asia Minor to the northwest began by 1200 BCE, leaving several centuries between that time and when the Pentateuch emerges clearly in the historical record.

Therefore, it is probable that the rituals of the Jews during the time of Diodorus and Plutarch derived from many centuries before, with influence from other cultures over the centuries that the area was occupied. This influence, of course, would extend to peculiarities of the Dionysian cultus as developed hellenically. So entrenched was the city’s association with Bacchus, in fact, that Pliny the Elder (23– 79 AD/ CE) equated Beth Shean/ Scythopolis with Nysa, “so named of Father Liber, because his nurse was buried there.” 874

Thus, it should not surprise us if the town was “founded” by the archaic wine god and if the Jewish fertility and harvest festival comprised many elements of Bacchic religion, possibly absorbed during the occupation of Beth Shean by Israelites. Other cities, such as Rafa, Rafah or Raphia (Egyptian Rph) in southern Israel/ Palestine on the border of Egypt, were claimed also, as by Stephanus of Byzantium (fl. 6th cent. AD/ CE), to have been founded by the wine god.

Kindle Locations 6536-6542

Regarding 2 Maccabees and the ancient association of Yahweh with the gods of other cultures such as Zeus or Jupiter, American New Testament professor Dr. Sean M. McDonough remarks:

An even more common identification, however, was Dionysus. Tacitus (Hist. 5.5: 5), Lydus (De Mensibus 4: 53), and Cornelius Labeo (ap. Macrobius, Saturnalia 1: 18: 18– 21) all make this association, and a coin from 55 BCE of the curule aedile A. Plautius shows a kneeling king who is labeled BACCHIVS IVDAEVS. E. Babelon argues that this must be the high priest, “the priest of the Jewish Bacchus.” This identification may have been based on more than mere speculation. According to 2 Macc. 6: 7, the Jews “were compelled to walk in the procession in honor of Dionysus, wearing wreaths of ivy”…

Kindle Locations 6841-6856

It is obvious not only that Jews were well aware of Bacchus but also that they revered his cult enough to feature him prominently, according to Maccabees, as well as Plutarch’s statements and the depiction of Dionysus’s life-cycle in ancient mosaics in Israel.

Indeed, the presence of Dionysus on mosaics from the third to fourth centuries AD/ CE in the finely appointed home of the apparent Jewish patriarch at Sepphoris or Tzippori, a village in Galilee, lends weight to Plutarch’s commentary. 1022 Significantly, this imagery depicts Bacchus and Herakles in a wine-drinking contest, which Dionysus wins, a theme flagrantly featured in the prominent Jewish citizen’s home. Since Herakles was a favorite of the Phoenicians, this symbolism could reflect the defeat of that faction commercially, in the wine trade. This central place for Bacchus indicates the wealth of the community depended significantly on the blessings of the grape.

If these later Jews were aware of Dionysus and unflinchingly revered him, it is reasonable to suggest that Israelites knew about his worship and myth in more remote antiquity, particularly as they became wine connoisseurs, a trade that dates back 3,000 years in the hill country where they emerged.

It is very significant that this site of Bacchus worship, Sepphoris, was deemed the Cana of the New Testament, where Jesus was said to have produced his water-to-wine miracle. 1023 It is clear that the gospel writers were imitating the popular Dionysus worship with the newly created Christ character.

Kindle Locations 7215-7228

The Greek god Dionysus’s worship extends back at least 3,200 years, but the reverence for a wine deity in general is much older. Extant ancient texts describing Bacchus’s myth date from the 10th century BCE to the fifth century AD/ CE. For many centuries since antiquity, scholars, theologians and others have noted numerous parallels between Dionysus and Moses, most attempting to establish biblical priority but some declaring that the former post-dated and was derived from the latter.

We have seen that important aspects of Bacchus’s life, described consistently for centuries dating back to the 10th century BCE at the latest, correspond to that of the Israelite lawgiver. Also discussed is the contention by Plutarch that the Jews practiced Bacchic rituals and that Diodorus equated the Jewish god with Dionysus, a reverence evident from Dionysian artifacts such as mosaics in at least one house of a wealthy and powerful Jew.

Since it appears that the Moses character was not created until sometime during or after the Babylonian exile, possibly with his myth in the Pentateuch not taking its final biblical form until the third century BCE, it is conceivable that Bacchic ideas from the Greek historians and poets prior to that time, such as Homer, Hesiod, Herodotus, Euripides and many others, were incorporated directly in the biblical myth. It is also possible that the framers of the Moses myth were aware of the Dionysian myths because they had been written into plays performed around the Mediterranean for centuries. The story of Bacchus in particular would have been well known enough not to need to rely on the texts directly; hence, the Dionysus-Moses connection could have been made early.

Kindle Locations 9403-9429

Sabaoth

The theonym Iao was used popularly in the magical papyri and other artifacts of the first centuries surrounding the common era: “The name Iao also appears on a number of magical texts, inscriptions and amulets from the ancient world.” 1545 These artifacts include an amulet from the first century BCE that reads: “IAO IAO SABAOTH ADONAI.” 1546 This sort of invocation indicates a Semitic origin but passes seamlessly into the formalized Gnosticism of the second century AD/CE onwards.

In the New Testament, the word Σαβαώθ Sabaoth is used twice, at Romans 9:29 and James 5:4. Strong’s defines the term as: “Lord of the armies of Israel, as those who are under the leadership and protection of Jehovah maintain his cause in war.” 1547 The title “Sabaoth” derives from the Hebrew root צבא tsaba’, which is defined as “hosts,” as in both warfare and heaven. In its astral connotation, צבא tsaba’ means “host (of angels)… of sun, moon, and stars… of whole creation.” 1548 Hence, we see an astrotheological theme in the “host of heavens.”

Concerning the amulets and the YHWH-Iao connection, Classics professor Dr. Campbell Bonner relates:

As to the meaning of Iao, there can be no doubt, especially since the subject was thoroughly investigated by Graf von Baudissin; and, in fact, the combination of Ιαω Σαβαωθ Αδωναι [Iao Sabaoth Adonai] “JHVH of hosts, Lord,” which is common on both amulets and papyri, is convincing in itself. 1549

As noted, “Sabaoth” may be related to “Sabeus,” which in turn is an epithet of Dionysus, who is also equated with Iao by Macrobius. Thus, Yahweh is Iao is Bacchus, and all are the sun.

The Sun

To reiterate, Iao was identified with the sun, as in the mysteries and the oracle of Apollo at Claros. Macrobius (1.18.20) relates that Iao was “supreme god among all,” represented by the wintry Hades, the vernal Zeus, the summery Helios and the autumnal Iao, 1550 also noted as Iacchus or Dionysus, the latter’s role as the sun in the fall appropriate for a wine god.

As can be seen from ancient testimony, and as related by Dr. Roelof van den Broek, a professor of Christian History at the University of Utrecht: “Iao stood for the Sun.” 1551

Adon-Adonai-Adonis

Once again, both Plutarch (Quaest. Conv.) and Macrobius (4th cent. AD/CE) identified the solar Iao with Bacchus, who in turn was equated by Diodorus with Yahweh. Plutarch (Symp. 5.3) also associates Bacchus with Adonis.

Did God Have a Wife?
by William G. Dever
Kindle Locations 2668-2800

Thus in the biblical writers’ view, from Moses to Ezekiel – 600 years, Israel’s entire history in Canaan – folk religion is bound up with rites having to do with “green trees,” rites prohibited, yet practiced nonetheless. Why the biblical writers’ obsession with trees? It seems pretty obvious: a luxuriant green tree represents the goddess Asherah, who gives life in a barren land. (Those of us who have lived in the Arizona desert appreciate why trees seem miraculous.) And on the ridges and hilltops, where one seems closer to the gods and can lift up one’s eyes to the heavens, the trees and groups of wooden poles erected to her added to the verdant setting and the ambiance of luxuriousness, of plenty.

Such “hilltop shrines” with groves of trees are well known throughout the Mediterranean world in the Bronze and Iron Ages, and they continued to flourish clear into the Classical era. Why should ancient Israel not have participated in this universal oriental culture of “fertility religions,” which celebrated the rejuvenation and sustaining powers of Nature? Perhaps Israel’s only unique contribution was to see over time that Nature is subsumed under Yahweh, “Lord of the Universe,” whose power ultimately gives life to humans and beast and field. But that insight was a long time coming, and it was fully realized only in the wisdom gained from the tragedy of the Babylonian captivity (Chapter VIII).

Despite what seems to me the transparency of the “tree” motif in connection with Asherah, ancient commentators seem to have been confused, and so were modern scholars until recently. As I have noted above (Chapter IV), the Greek translators of the Hebrew Bible in the 3rd-2nd century B.C. were already sufficiently removed from the Iron Age reality that they did not understand the real meaning of Hebrew ’asherah. Thus they rendered the term by the Greek word alsos, “grove,” or dendron, “tree.” […]

There is, however, evidence of still another goddess who was venerated by the ancient Judeans. The prophet Ezekiel reports that at the gate of the Temple in Jerusalem there sat “women weeping for Tammuz” (Ezekiel 8:14). “Tammuz” was the later name of the 3rd millennium Sumerian god Dumuzi. He was a seasonal “dying and rising” god whose consort was Ishtar (Sumerian Inanna). Like Canaanite Baal in the western Semitic world, Dumuzi died annually in the early summer when the rains ceased, and then he descended into the underworld as though dead. Ishtar mourned his passing, but in the fall she helped to bring him back to life, and they re-consummated their sexual union. Thus Nature was fructified in an unending cycle of love, death, and reunion. The Mesopotamian cult of Tammuz was largely the province of women, who naturally empathize with his “widow” Ishtar, and ritually mourn his passing. There seems little doubt that this pan-Mediterranean seasonal myth of Baal and `Anat, Tammuz and Ishtar, was popular in some circles in Judah, especially after the Assyrian impact in the late 8th century B.C. (Ackerman 1992:79-80).

There is also evidence of other mourning rituals in the Hebrew Bible, for other male deities. In Elijah’s famous contest on Mt. Carmel, the prophets of Baal attempt to call up the dead vegetation deity Baal by ritually gashing their flesh (I Kings 18:28), a typical funerary rite. Baal is also known by his other name Hadad, and in Zechariah 12:10, 11 there is a description of “mourning for Hadad-Rimmon in the Valley of Megiddo.” Hosea 7:14 may also refer to the same rites, condemning those who “turn to Baal” (Hebrew uncertain), who “wail upon their beds” and “gash themselves.”

Before leaving what may seem to be a confusing multiplicity of female (and male) deities, and the question of which cultic artifacts may relate to which, let me note one fact that may help. In the eastern Mediterranean world generally, there appear many local deities, both male and female, who were probably conceived of as particular manifestations of the more cosmic high gods. Thus in Canaan, we have texts naming Ba`alat (the feminine counterpart of Baal) “of Byblos.” The male deity Baal appears in Canaanite texts as Baal Zephon, “Ba’al of the North.” Baal appears in the Hebrew Bible as “Ba’al (of) Hazor”; “Ba’al (of) Hermon”; “Ba’al (of) Meon”; “Ba’al (of) Peor”; and “Ba’al (of) Tamar.” In the Kuntillet `Ajrud texts discussed above, we find mention of “Yahweh of Samaria,” and “Yahweh of Teman (Yemen).” Thus a number of scholars have called attention to the tendency of the High God or Goddess to appear in the form of the deity of a particular local cult, often with a hyphenated name. This would be a sort of “diffusion” of the deity; but on the other hand, these deities could coalesce again under different conditions into a sort of “conflate” deity. The result is often great confusion of names and identities. For instance, a long chain of textual witnesses over time results in the following equation: Baal-Hadad = Baal-Shamen (“of the heavens”) = Zeus Helios = Heliopolitan Zeus. All these names, however, are reflexes of the great West Semitic high god Baal, “Lord of the Heavens/Sun” (the Greek equation of Baal with Zeus and helios, “sun,” is transparent). Likewise Canaanite `Anat became Greek Athena, the warlike patron deity of Athens. And Canaanite-Israelite Asherah appears later as Greek Aphrodite and Roman Venus, the latter also goddesses of beauty, love, and sexual pleasure. The similarities are unequivocal: Asherah and Aphrodite are both connected to the sea, and doves are symbols of both. 
Aphrodite’s lover Adonis clearly preserves the earlier Phoenician-Hebrew word ‘adon, “Lord.”

Of relevance for the female deities worshipped in ancient Israel, we should note the work of my teacher Frank Cross and several of his students. They have argued that the three great goddesses of Ugarit – Asherah, `Anat, and Astarte – are all in effect “hypostatizations” of the cosmic Great Goddess of Canaan, all playing the same role but each perhaps venerated in a particular local manifestation, tradition, and cult. We could insist on choosing one – but should we? In Roman Catholic piety, especially among ordinary, unsophisticated worshippers we encounter many “Marys” – “Our Lady of Guadalupe”; “Our Lady of Lourdes”; etc. Are these different “Marys,” or one in many guises? Often folk religion may be universal and timeless; but it is always the here and now that matters. Thus women in ancient Israel were probably addressing their special concerns to the Great Mother of Canaan who lived on in the Iron Age, whether they knew her as “Asherah,” the “Queen of Heaven,” or “Ishtar,” or “Astarte.” I think that most conceived of her as a consort of the male deity Yahweh, but others may have seen her more as simply a personification of Yahweh’s more “feminine” attributes.

Early Judaism
by John J. Collins & Daniel C. Harlow
pp. 20-24

Throughout the period under consideration in this volume, Jews lived in a world permeated by Hellenistic culture. The pervasiveness of Hellenistic influence can be seen even in the Dead Sea Scrolls (where there is little evidence of conscious interaction with the Greek world), for example, in the analogies between the sectarian communities and voluntary associations.

Modern scholarship has often assumed an antagonistic relationship between Hellenism and Judaism. This is due in large part to the received account of the Maccabean Revolt, especially in 2 Maccabees. The revolt was preceded by an attempt to make Jerusalem into a Hellenistic polis. Elias Bickerman (1937) even argued that the persecution was instigated by the Hellenizing high priest Menelaus, and in this he was followed by Martin Hengel (1974). Yet the revolt did not actually break out until the Syrian king, Antiochus IV Epiphanes, had disrupted the Jerusalem cult and given the Temple over to a Syrian garrison. The revolt was not directed against Hellenistic culture but against the policies of the king, especially with regard to the cult. Judas allegedly sent an embassy to Rome and availed himself of the services of one Eupolemus, who was sufficiently proficient in Greek to write an account of Jewish history. The successors of the Maccabees, the Hasmoneans, freely adopted Greek customs and even Greek names. Arnaldo Momigliano wrote that “the penetration of Greek words, customs, and intellectual modes in Judaea during the rule of the Hasmoneans and the following Kingdom of Herod has no limits” (Momigliano 1994: 22; see also Hengel 1989; Levine 1998). Herod established athletic contests in honor of Caesar and built a large amphitheater, and even established Roman-style gladiatorial contests. He also built temples for pagan cults, but not in Jewish territory, and he had to yield to protests by removing trophies, which involved images surrounded by weapons, from the Temple. In all cases where we find resistance to Hellenism in Judea, the issue involves cult or worship (Collins 2005: 21-43). Many aspects of Greek culture, including most obviously the language, were inoffensive. The revolt against Rome was sparked not by cultural conflict but by Roman mismanagement and social tensions.

Because of the extensive Hellenization of Judea, the old distinction between “Palestinian” Judaism and “Hellenistic” (= Diaspora) Judaism has been eroded to a great degree in modern scholarship. Nonetheless, the situation of Jews in the Diaspora was different in degree, as they were a minority in a pagan, Greek-speaking environment, and the Greek language and cultural forms provided their natural means of expression (Gruen 1998, 2002). The Jewish community in Alexandria, the Diaspora community about which we are most fully informed, regarded themselves as akin to the Greeks, in contrast to the Egyptians and other Barbaroi. The Torah was translated into Greek already in the third century B.C.E. Thereafter, Jewish authors experimented with Greek genres — epic, tragedy, Sibylline oracles, philosophical treatises (Goodman in Vermes et al. 1973-1987: 3: 1.470-704; Collins 2000). This considerable literary production reached its apex in the voluminous work of the philosopher Philo in the early first century C.E. This Greco-Jewish literature has often been categorized as apologetic, on the assumption that it was addressed to Gentiles. Since the work of Victor Tcherikover (1956), it is generally recognized that it is rather directed to the Jewish community. Nonetheless, it has a certain apologetic dimension (Collins 2005: 1-20). It is greatly concerned to claim Gentile approval for Judaism. In the Letter of Aristeas, the Ptolemy and his counselors are greatly impressed by the wisdom of the Jewish sages. Aristeas affirms that these people worship the same God that the Greeks know as Zeus, and the roughly contemporary Jewish philosopher Aristobulus affirms that the Greek poets refer to the true God by the same name. The Sibyl praises the Jews alone among the peoples of the earth. Philo, and later Josephus, is at pains to show that Jews exhibit the Greek virtue of philanthrōpia. […]

The story of modern scholarship on early Judaism is largely a story of retrieval. None of the literature of this period was preserved by the rabbis. The Greek literature of the Diaspora may not have been available to them. Much of the apocalyptic literature and of the material in the Dead Sea Scrolls was rejected for ideological reasons. The recovery of this literature in modern times presents us with a very different view of early Judaism than was current in the nineteenth century, and even than more recent accounts that impose a rabbinic paradigm on the period in the interests of normativity.

No doubt, our current picture of early Judaism is also incomplete. Despite the important documentary papyri from the Judean Desert dating to the Bar Kokhba period (Cotton in Oppenheimer, ed. 1999: 221-36), descriptions of the realia of Jewish life still rely heavily on rabbinic sources that are possibly anachronistic. The overdue study of women in this period is a case in point (Ilan 1995). One of the salutary lessons of the Dead Sea Scrolls is that they revealed aspects of Judaism that no one would have predicted before the discovery. And yet this was only the corpus of writings collected by one sect. To do justice to early Judaism we would need similar finds of Pharisaic, Sadducean, and other groups, and further documentary finds similar to those that have shed at least limited light on Egyptian Judaism and on Judah in the Bar Kokhba period.

Aphrodite and the Rabbis
by Burton L. Visotzky
pp. 56-58

There is even a dual-language inscription, first in Hebrew and then in Greek, of the name Rabbi Gamaliel, possibly the same rabbi who was patriarch of the Jewish community. Artistic motifs on the Beth Shearim sarcophagi include the ark or desert tabernacle, palm fronds, and lions (of Judah?)—all commensurate with rabbinic religion. But there are also eagles, bulls, Nike (the goddess of victory), Leda and the swan (aka Zeus), a theater mask, a spear-carrying warrior fragment, and yet other fragments of busts, statues, and bas reliefs of humans, none of which might be considered very “Jewish” by the rabbis of the Talmud. It’s hard to know what to make of this mishmash of pagan and Jewish burial symbols.

Even more confusing, perhaps, is the fact that in a number of synagogues from the Byzantine period that have been unearthed across the Galilee, the mosaics on the floors, most often in the central panels, display a zodiac with the twelve months, depicted in a circle enclosed in a square frame. At each corner of the square is a personification of the season of the year in that quadrant—except for the one mosaic where the floor guy got the order of the seasons confused and laid them in the wrong corners. I suppose a zodiac is conceivably within the pale, except it has a whiff of paganism about it. But what is truly astonishing about these mosaics is that in the center of the circle in each of these synagogues, there is Zeus-Helios, riding his quadriga (a chariot drawn by four horses) across the floor-bound sky!

To say the least, the god Zeus is unexpected on a synagogue floor, and there is no scholarly consensus whatsoever as to what this possibly can mean about Judaism in Roman Palestine. The quadriga is, however, a fairly popular and perhaps even universal symbol of strength. Above is the famous quadriga atop Berlin’s Brandenburg Gate.

But really, Zeus-Helios riding across the floor of Holy Land synagogues? We’ll discuss this more later. But if we add to this artistic record the Samaritans’ Temple on Mt. Gerizim (near modern Nablus), we must conclude that the overwhelming physical evidence of Judaism, even in Roman and Byzantine Palestine, is decidedly not the Judaism of the Talmudic rabbis.

p. 188

The surprise is in the central panel. Here is a zodiac, complete with Greek mythical figures—including an uncircumcised boy representing the month of Tishrei (Libra). Smack in the middle of the zodiac circle is the divine figure of Zeus-Helios, riding his four-horsed quadriga. Depictions of Helios can also be found in synagogue remains at Na’aran (in the South), at Bet Alpha (also in the Galilee), and at Sepphoris.

pp. 198-205

Given the art we have uncovered in the synagogue there, I must conclude that the Jews of Sepphoris also were comfortable living among their pagan neighbors. […]

Of course, Jewish tradition tells of another great musician and harp player, King David. So we shouldn’t be entirely surprised to see him on the mosaic floor of the early sixth-century CE synagogue on the coast at Gaza, looking remarkably like Orpheus. Just in case you might think it actually is Orpheus, the mosaic has a caption to the right of the Jewish king’s head identifying him in Hebrew as “David.” But he is clearly modeled on Orpheus—his harp is charming a snake, a lioness, and even a giraffe (or maybe a long-necked gazelle).

This brief detour to see King David in Gaza has brought us back from pagan gods and heroes once more to Jewish characters in synagogues. Let’s return to Sepphoris now to take a closer look at the art in the synagogue excavated there. The synagogue dates from the fourth century, and its art is typical: menorahs, palm, and citron (the biblically commanded lulav and etrog, used for the holiday of Sukkot), lions, a shofar, and other biblical horns.

As we walk to the front of the main sanctuary, bordered on either side by the Jewish symbols just mentioned, there is a mosaic panel of the Temple—or maybe it’s a Torah ark? In any case, the doors of that building are topped with a shell shape and bracketed by pillars. This ubiquitous depiction of doors is found in many Roman-era synagogues. But it also is found on a sarcophagus in the Naples Museum, there identified as a Christian resting place. And similar sets of doors can be found outside of religious contexts, at least Jewish or Christian ones.

We already have seen “the doorway” in funerary and synagogue contexts, but it is also found on a wall in Herculaneum, the pagan town that was covered along with Pompeii by the eruption of Mt. Vesuvius in 79 CE. The doorway is flanked by columns on both sides, with the oft-seen shell above the portal. Within the doorway is neither a Torah nor a Temple priest, but two figures: male and female. Most art historians identify them as Poseidon and his wife, Amphitrite. The shell is appropriate for the King of the Sea.

But what can this tell me about the depiction of the shell and the doorway in synagogue art? That type of doorway may be a Torah ark or shrine, since the one depicted in Rome’s Jewish catacomb at Villa Torlonia shows scrolls inside the open doors. It may symbolize God’s house, as it seems to be a portal for the gods in the picture above. But the doorway also may be symbolic of the monumental gates of the Jerusalem Temple. In Jewish Roman art it may even represent the synagogue itself. There are too many options to decide with any assurance what the door is supposed to represent. I would like to think that the one thing the doorway should not represent in synagogue art, however, is a portal for pagan gods. […]

The quadriga and my mention of the Bible bring me right back to the synagogue at Sepphoris and a confusing, complex image there. The central panel of the synagogue floor’s mosaic “carpet” depicts the zodiac, with Zeus-Helios riding his quadriga across the sky as the central focus. The prevalence of the zodiac in synagogue art may indicate an area of divergence between the rabbis of Talmudic circles and the Jews in the synagogue communities of Roman Palestine. The rabbis expressed their stern disapproval of the image, while the Jews in the synagogue seemed to enjoy the motif.

In fact, the zodiac occupies a significant place in the broader Jewish worldview. Each Jewish month is measured by the phases of the moon, visible over its monthly cycle. Given that this is a phenomenon observable in nature, it is not surprising that the months of the Jewish calendar correspond with other cultures’ lunar calendars. Indeed, the rabbis’ calendar borrows the names of its months from Babylonia; and these months are congruent with the signs of the celestial zodiac. However, the rabbis do not believe that astrology rules Jewish fate—the Talmud explicitly rejects this notion when it more than once pronounces: “The astrological signs [Hebrew: mazal] are not for the Jews.”

Yet in Palestinian synagogue zodiac mosaics, the months are depicted by astrological signs. The roundel of synagogue zodiac wheels, even when they are captioned in Hebrew, depicts those signs. […]

Throughout the ancient world, the sun was the preeminent symbol of daily constancy. The diurnal round of the sun with its warmth and healing power was seen as a benefaction from the gods or from God. In polytheistic pagan cultures, the sun was often seen as a god, Sol Invictus, the invincible sun, also known as Zeus-Helios. Yet anyone who has read the Ten Commandments knows only too well that this is a disturbing, even forbidden, notion. Exodus 20:3–5 commands:

You shall have no other gods before Me. You shall not make any statue nor any depiction of what is in the heaven above, nor on the earth below, nor in the waters below on the earth. You shall not bow down to them nor worship them, for I, the LORD your God, am a jealous God . . .

When Rabbi Gamaliel made his comment about Aphrodite in the bathhouse, which I recounted to you earlier, he offered Jewish legal parameters for representation of living forms in subsequent Jewish art. We do not represent gods to be worshipped but can represent figures, even human, for aesthetic reasons. Beauty is not forbidden; it is rather encouraged, especially as an offering to God. This is how Gamaliel was able to bathe before that statue of Aphrodite. Even so, the center of the zodiac at the Sepphoris synagogue remains challenging, as it depicts the sun god Helios, riding his heavenly quadriga across the daytime sky. […]

Clearly, the community of the synagogue in Sepphoris was not too worried about the Second Commandment’s prohibition against heavenly bodies, even if Helios was depicted only symbolically. This representation might reflect a tradition in the Babylonian Talmud, where Rabbi Yehoshua ben Hannaniah likened the difficulty of looking directly at the sun to the difficulty of beholding God. So perhaps the orb of the sun in the Sepphoris synagogue mosaic is meant only to represent, but not to picture, God.

In truth, this mosaic is hardly unique. The synagogues in Huseifa and Hammat Tiberias also have zodiacs on their floors. At Hammat, Helios/Sol is not merely an orb, but incarnate. […]

Hammat Tiberias and even Sepphoris/Diocaesarea were Roman imperial cities. So it is possible that the Jews there were more assimilated and so were more comfortable with these pagan symbols. Perhaps the urban communities were just that much more cosmopolitan and laissez-faire about their Jewish practice. But in fact there are also zodiacs in the small town synagogues of Na’aran, near Jericho, and at Beit Alpha, in the Galilee. These are not big urban centers, and while the primitive art of Beit Alpha shows a lack of sophistication, it enthusiastically embraces the Zeus-Helios image. […]

To further complicate our understanding of the images found on these synagogue floors, Helios is invoked in a Jewish prayer, recovered in a quasi-magical liturgical text from the fourth century CE among the manuscripts of the Cairo Geniza, the ancient used-book depository. The prayer is in a manuscript called Sefer HaRazim, the Book of Mysteries. We quoted this prayer above, while discussing Gamaliel’s bath with Aphrodite. Here is the line of Greek, transliterated into Hebrew, which names Helios:

I revere you HELIOS, who rises in the east, the good sailor who keeps faith, the heavenly leader who turns the great celestial wheel, who orders the holiness (of the planets), who rules over the poles, Lord, radiant ruler, who fixes the stars.

The Helios prayer gives us a peek at Greco-Roman Jewish folk religion in Roman Palestine during this period. Perhaps it also sheds light on the Zeus-Helios images on the synagogue floors. Helios, or Sol Invictus, as he was known in Latin, apparently was a revered god, at least by some. He was a pagan god who might have been identified with the One and Only God in the minds of the Jews who beheld him riding across their community’s synagogue floor.

The Helios phenomenon is even more complicated than the Jewish evidence alone allows. The last pagan emperor, Julian, who reigned from 361 to 363, wrote about Helios,

What I am now about to say I consider to be of the greatest importance for all things “That breathe and move upon the earth” and have a share in existence and a reasoning soul and intelligence, but above all others it is of importance to myself. For I am a follower of King Helios . . . the King of the whole universe, who is the center of all things that exist. He, therefore, whether it is right to call him the Supra-Intelligible, or the Idea of Being, and by Being I mean the whole intelligible region, or the One. . . .

In Julian’s “Hymn to King Helios,” we see a pagan praise his god as the One. Julian defines attributes of Helios not unlike those that the rabbis attribute to their one God. To the extent that the Jews who placed the image of Zeus/Helios on the floors of their synagogues knew or agreed with Julian’s theology, the image may have been a convenient pictorial stand-in for God. Some synagogue mosaics depicting biblical stories also show the hand of God reaching down from Heaven. So Helios simply might represent the Jews’ God in these synagogue mosaics.

Snow Crash vs Star Trek

“[C]yberpunk sci-fi of the 1980s and early 1990s accurately predicted a lot about our current world. Our modern society is totally wired and connected, but also totally unequal,” writes Noah Smith (What we didn’t get, Noahpinion). “We are, roughly, living in the world the cyberpunks envisioned.”

I don’t find that surprising. Cyberpunk writers were looking at ongoing trends and extrapolating about the near future. We are living in that near future.

Since inequality in the US began growing several decades ago, around the time cyberpunk emerged as a genre, it wasn’t hard to imagine that such inequality would continue to grow and play out within technology itself. And the foundations for present technology were laid in the decades before cyberpunk. The broad outlines of the world we now live in could be seen earlier last century.

That isn’t to downplay the predictions the cyberpunk writers made. But it does put them into context.

Smith then asks, “What happened? Why did mid-20th-century sci fi whiff so badly? Why didn’t we get the Star Trek future, or the Jetsons future, or the Asimov future?” His answer is that, “Two things happened. First, we ran out of theoretical physics. Second, we ran out of energy.”

That question and its answer are premature. We haven’t yet fully entered the Star Trek future. One of the first major events in its future history is the Bell Riots, which happen seven years from now this month, though conditions are supposed to worsen over the years preceding them (i.e., the present). Like the cyberpunk writers, Star Trek predicted an age of growing inequality, poverty, and homelessness. And that is to be followed by international conflict, global nuclear war, and the massive devastation of civilization.

World War III will end in 2053. The death toll will be 600 million. Scientific research continues, but it will take decades for civilization to recover. It’s not until the 22nd century that serious space exploration begins. And it’s not until later in that century that the Federation is formed. The Star Trek visionaries weren’t starry-eyed optimists offering much hope to living generations. They made clear that the immediate future was going to be as dark or darker than most cyberpunk fiction.

The utopian world that I watched in the 1990s was from The Next Generation and Deep Space Nine. Those two shows portray the world 250 years from now, about the same distance from us as the last decades of the American colonial era. It’s unsurprising that a pre-revolutionary writer might have predicted the invention of the cotton gin at the end of the 18th century, and just as unsurprising that he couldn’t have predicted the world we now live in. That is why I would argue it’s premature to say that no further major advancements in science will be made over that time period.

Scientific discoveries and technological developments tend to happen in spurts, with progress building incrementally within them. That is part of what makes Star Trek compelling: it offers the incremental details of how we might get from here to there. We can be guaranteed that, assuming we survive, future science will seem like magic to us, based as it would be on knowledge we don’t yet possess. At the beginning of the 20th century, there were those who predicted that nothing significant was left for humans to learn and discover. I laugh at anyone who makes the same naive prediction here at the beginning of the 21st century.

To be fair, Smith doesn’t end there. He asks, “These haven’t happened yet, but it’s only been a couple of decades since this sort of futurism became popular. Will we eventually get these things?” And he adds that, “we also don’t really have any idea how to start making these things.”

Well, no one can answer what the world will be like in the distant future any more than anyone in the distant past could predict the world that has come to pass. Nothing happens until it happens. And no one really has any idea how to start making anything, until someone figures out how to do so. History is an endless parade of the supposedly impossible becoming possible, the unforeseen becoming commonplace. But it is easy to argue that recent changes have caused a rupture and that even greater changes are to come.

Smith goes on to conjecture that, “maybe it’s the authors at the very beginning of a tech boom, before progress in a particular area really kicks into high gear, who are able to see more clearly where the boom will take us.” Sure. But no one can be certain whether or not they are at the beginning of a tech boom. That can only be seen clearly in retrospect.

If the Star Trek future is more or less correct, the coming half century will be the beginning of a new tech boom that leads to the development of warp drive in 2063 (or something akin to it). Following that would come an era of distant space travel and colonization. That would be the equivalent of my grandparents’ generation growing up with the first commercially sold cars and, by adulthood a half century later, experiencing the first manned space flight, there being no way to predict the latter from the former.

As a concluding thought, Smith states that, “We’ll never know.” I’m sure many in my grandparents’ generation said the same thing. Yet they did come to know, as the future came faster than most expected. When that next stage of technological development is in full force, according to Star Trek’s future historians, those born right now will be hitting middle age and those reaching young adulthood now will be in their sixties. Plenty of people in the present living generations will be around to know what the future holds.

Maybe the world of Snow Crash we seem to be entering into will be the trigger that sends us hurtling toward Star Trek’s World War III and all that comes after. Maybe what seems like an endpoint is just another beginning.

* * *

About predictions, I am amused by early 20th century proclamations that all or most great discoveries and inventions had been achieved. The belief was that the following century would be limited to working out the details and implementing the knowledge they already had.

People at the time had just gone through a period of tumultuous change and it was hard to imagine anything further. Still, it was a time of imagination, when the earliest science fiction was popularized. Most of the science fiction of the time extrapolated from what was known from the industrial age, from Newtonian physics and Darwinian evolution. Even the best predictions of the time couldn’t see that far ahead. And like cyberpunk, some of the predictions that came true in the following decades were dark, such as world war and fighting from the air. Yet it was hard for anyone to see clearly even into the end of the century, much less the century following that.

The world seemed pretty well explained and many felt improvements and progress were hitting up against a wall. So, it would be more of the same from then on. The greater changes foreseen tended toward the social rather than the technological. Otherwise, most of the experts felt certain they had a good grasp of the kind of world they lived in, what was possible and impossible. In retrospect, such confidence is amusing to an extreme degree. The following passage describes the context of that historical moment.

Stranger Than We Can Imagine
by John Higgs
pp. 17-19

It appeared, on the surface, to be an ordered, structured era. The Victorian worldview was supported by four pillars: Monarchy, Church, Empire and Newton.

The pillars seemed solid. The British Empire would, in a few years, cover a quarter of the globe. Despite the humiliation of the Boer War, not many realised how badly the Empire had been wounded and fewer still recognised how soon it would collapse. The position of the Church looked similarly secure, despite the advances of science. The authority of the Bible may have been contradicted by Darwin and advances in geology, but society did not deem it polite to dwell too heavily on such matters. The laws of Newton had been thoroughly tested and the ordered, clockwork universe they described seemed incontrovertible. True, there were a few oddities that science puzzled over. The orbit of Mercury, for instance, was proving to be slightly different to what was expected. And then there was also the issue of the aether.

The aether was a theoretical substance that could be described as the fabric of the universe. It was widely accepted that it must exist. Experiments had shown time and time again that light travelled in a wave. A light wave needs something to travel through, just as an ocean wave needs water and a sound wave needs air. The light waves that travel through space from the sun to the earth must pass through something, and that something would be the aether. The problem was that experiments designed to reveal the aether kept failing to find it. Still, this was not considered a serious setback. What was needed was further work and cleverer experiments. The expectation of the discovery of the aether was similar to that surrounding the Higgs boson in the days before the CERN Large Hadron Collider. Scientific wisdom insisted that it must exist, so it was worth creating more and more expensive experiments to locate it.

Scientists had an air of confidence as the new century began. They had a solid framework of knowledge which would withstand further additions and embellishments. As Lord Kelvin was reputed to have remarked in a 1900 lecture, “there is nothing new to be discovered in physics now. All that remains is more and more precise measurement.” Such views were reasonably common. “The more important fundamental laws and facts of physical science have all been discovered,” wrote the German-American physicist Albert Michelson in 1903, “and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote.” The astronomer Simon Newcomb is said to have claimed in 1888 that we were “probably nearing the limit of all we can know about astronomy.”

The great German physicist Max Planck had been advised by his lecturer, the marvellously named Philipp von Jolly, not to pursue the study of physics because “almost everything is already discovered, and all that remains is to fill a few unimportant holes.” Planck replied that he had no wish to discover new things, only to understand the known fundamentals of the field better. Perhaps unaware of the old maxim that if you want to make God laugh you tell him your plans, he went on to become a founding father of quantum physics.

Scientists did expect some new discoveries. Maxwell’s work on the electromagnetic spectrum suggested that there were new forms of energy to be found at either end of his scale, but these new energies were still expected to obey his equations. Mendeleev’s periodic table hinted that there were new forms of matter out there somewhere, just waiting to be found and named, but it also promised that these new substances would fit neatly into the periodic table and obey its patterns. Both Pasteur’s germ theories and Darwin’s theory of evolution pointed to the existence of unknown forms of life, but also offered to categorise them when they were found. The scientific discoveries to come, in other words, would be wonderful but not surprising. The body of knowledge of the twentieth century would be like that of the nineteenth, but padded out further.

Between 1895 and 1901 H.G. Wells wrote a string of books including The Time Machine, War of the Worlds, The Invisible Man and The First Men in the Moon. In doing so he laid down the blueprints for science fiction, a new genre of ideas and technological speculation which the twentieth century would take to its heart. In 1901 he wrote Anticipations: An Experiment in Prophecy, a series of articles which attempted to predict the coming years and which served to cement his reputation as the leading futurist of the age. Looking at these essays with the benefit of hindsight, and awkwardly skipping past the extreme racism of certain sections, we see that he was successful in an impressive number of predictions. Wells predicted flying machines, and wars fought in the air. He foresaw trains and cars resulting in populations shifting from the cities to the suburbs. He predicted fascist dictatorships, a world war around 1940, and the European Union. He even predicted greater sexual freedom for men and women, a prophecy that he did his best to confirm by embarking on a great number of extramarital affairs.

But there was a lot that Wells wasn’t able to predict: relativity, nuclear weapons, quantum mechanics, microchips, black holes, postmodernism and so forth. These weren’t so much unforeseen, as unforeseeable. His predictions had much in common with the expectations of the scientific world, in that he extrapolated from what was then known. In the words commonly assigned to the English astrophysicist Sir Arthur Eddington, the universe would prove to be not just stranger than we imagine but, “stranger than we can imagine.”

 

The Moderate Republicans of the Democratic Party

“I don’t know that there are a lot of Cubans or Venezuelans, Americans who believe that. The truth of the matter is that my policies are so mainstream that if I had set the same policies that I had back in the 1980s, I would be considered a moderate Republican.”
~Barack Obama, 2012 interview (via DarkSkintDostoyevsky)

Not just a moderate but a moderate Republican. His argument was that the GOP has moved so far right that he now holds what was once a standard position among Republicans.

This is supported by his having continued Bush-era policies, further legalized the War on Terror, and deported more immigrants than any president before him, even at a higher rate than Trump. His crowning achievement was to pass Romneycare, a healthcare reform that originated from a right-wing think tank, while refusing to consider universal healthcare or single payer, which most Americans, standing far to his left, were demanding. Heck, he even expanded gun rights by allowing guns to be carried on federal land.

The unstated implication is that, in order to occupy what was once Republican territory, the Democrats also had to move right. But this didn’t begin with Obama. Mick Arran notes that, “In ’92 or 93 Bill Clinton said, in public, on the record, that his admin would be a ‘moderate Republican administration’. It was.” It’s easy to forget how that decade transformed the Democratic Party. This is made clear by E.J. Dionne Jr. in a 1996 piece from the Washington Post (Clinton Swipes the GOP’s Lyrics):

The president was among the first to broach the notion of Clinton as Republican — albeit more in frustration than pleasure. “Where are all the Democrats?” Clinton cried out at a White House meeting early in his administration, according to “The Agenda,” Bob Woodward’s account of the first part of the Clinton presidency. “I hope you’re all aware we’re all Eisenhower Republicans. We’re Eisenhower Republicans here, and we are fighting the Reagan Republicans. We stand for lower deficits and free trade and the bond market. Isn’t that great?”

To be fair, this shift began much earlier. What we call Reaganomics actually began under Jimmy Carter, a change that included ushering in deregulation. At CounterPunch, Chris Macavel writes (The Missing Link to the Democratic Party’s Pivot to Wall Street):

As eminent historian Arthur Schlesinger Jr., an aide to President Kennedy, posited, Carter was a Democrat in name only; his actions were more characteristically Republican. He observes: “[T]he reason for Carter’s horrible failure in economic policy is plain enough. He is not a Democrat — at least in anything more recent than the Grover Cleveland sense of the word.” Grover Cleveland, it must be remembered, was an austerity Democrat who presided over an economic depression in the late 19th century. According to Schlesinger, Carter is “an alleged Democrat” who “won the presidency with demagogic attacks on the horrible federal bureaucracy and as president made clear in the most explicit way his rejection of… affirmative government…. But what voters repudiated in 1980 [Carter’s defeat] was not liberalism but the miserable result of the conservative economic policies of the last half dozen years.” (Leuchtenburg 17)

It was Carter who, as the first Evangelical president, helped to create a new era of politicized religion. He was a conservative culture warrior seeking moral reform as part of the Cold War fight against Godless communism. Of course, conservatism meant something far different back then, as it could still be distinguished from the reactionary right-wing. Strange as it seems, Carter was a conservative who wanted to conserve, although he didn’t want to conserve a progressive worldview. His austerity economics went hand in hand with an antagonism toward welfare, unions, and leftist activists. New Deal Progressivism was mortally wounded under the Carter administration.

As fellow Southerners, Carter and Clinton were responding to Nixon’s Southern Strategy by rebranding the Democratic Party with conservative rhetoric and policies. There was a more business-friendly attitude. In place of progressivism, what took hold was realpolitik pessimism but with a friendly face.