Why is community dead? In which I blame colleges.


By any objective analysis, life in modern America is pretty darn good. You probably didn’t die in childbirth and neither did half of your children. You haven’t died of smallpox or polio. You probably haven’t lived through a famine or war. Cookies and meat are cheap, houses are big, most of us do rather little physical labor, and we can carry the collected works of Shakespeare, Tolstoy, and Wikipedians in our pockets. We have novocaine for tooth surgery. If you avoid drugs and don’t eat too much, there’s a very good chance you’ll survive into your eighties.

Yet anxiety is skyrocketing. Something about modern life doesn’t seem to agree with people.

In the past, people grew up in small towns or rural areas near small towns, knew most of the people in their neighborhoods, went to school, got jobs, and got married. They moved if they needed more land or saw opportunities in the gold fields, but most stayed put. We know this because we can read about it in the historical record.

One of the results was strong continuity of people in a particular place, and that continuity allowed the development of those “civic associations” people are always going on about. Kids joined clubs at school, clubs at church, then transitioned into adult clubs when they graduated. At every age there were clubs, and clubs organized and ran events for the community.

Of course club membership was mediated by physical location–if you live in a town you will be in more clubs than if you live in the country and have to drive an hour to get there–but in general, life revolved around clubs (and church, which we can generously call another kind of club, with its own sub-clubs).

In such an environment, it is easy to see how someone could meet their sweetheart at 16, become a functioning member of society at 18, get a job, put a down payment on a house, get married by 20 or 22 and start having children.

Today, people go to college.

Forget your high school sweetheart: you’re never going to see her again.

After college, people typically move again, because the job they’ve spent 4 years training for often isn’t in the same city as their college.

So forget all of your college friends: chances are you’ll never see any of them again, either.

Now you’re living in a strange city, full of strangers. You know no one. You are part of no clubs. No civic organizations. You feel no connection to anyone.

“Isn’t diversity great?” someone crows over kebabs, and you think, “Hey, at least those Muslims over there have each other to talk to.” Soon you find yourself envying the Hispanics. They have a community. You have a bar.

People make do. They socialize after work. They reconnect with old friends on Facebook and discover that their old friends are smug and annoying because Facebook is a filter that turns people smug and annoying.

But you can’t repair all of the broken connections.

Meanwhile, all of those small, rural towns have lost their young adults. Many of them offer no realistic future for young people who stay; everyone who can leave, does. All that’s left behind are children, old people, and the few folks who didn’t quite make it into college.

The cities bloat with people who feel no connection to each other and small towns wither and die.

As the Guardian reports in “Why are so many people dying of opiate overdoses?”:

Never mind the ‘war on drugs’ or laying all blame with pharmas, this epidemic exists because millions live in a world without hope, certainty and structure…

The number one killer of Americans under the age of 50 isn’t cancer, or suicide, or road traffic accidents. It’s drug overdoses. They have quadrupled since 1999. More than 52,000 Americans died from drug overdoses last year. Even in the UK, where illegal drug use is on the decline, overdose deaths are peaking, having grown by 10% from 2015 to 2016 alone. …

Opioids, whatever their source, bond with receptors all over our bodies. Opioid receptors evolved to protect us from panic, anxiety and pain – a considerate move by the oft-callous forces of evolution. …

The overdose epidemic compels us to face one of the darkest corners of modern human experience head on, to stop wasting time blaming the players and start looking directly at the source of the problem. What does it feel like to be a youngish human growing up in the early 21st century? Why are we so stressed out that our internal supply of opioids isn’t enough? …

You get opioids from your own brain stem when you get a hug. Mother’s milk is rich with opioids, which says a lot about the chemical foundation of mother-child attachment. When rats get an extra dose of opioids, they increase their play with each other, even tickle each other. And when rodents are allowed to socialise freely (rather than remain in isolated steel cages) they voluntarily avoid the opiate-laden bottle hanging from the bars of their cage. They’ve already got enough. …

So what does it say about our lifestyle if our natural supply isn’t sufficient and so we risk our lives to get more? It says we are stressed, isolated and untrusting.

(Note: college itself is enjoyable and teaches people valuable skills. This post is not opposed to “learning things,” just to an economic system that separates people from their loved ones.)


Racism OCD and Other Political Neuroses 

 

Source: Evangelion

In his post on the Chamber of Guf, Slate Star Codex discussed a slate of psychiatric conditions where the sufferer becomes obsessed with not sinning in some particular way. In homosexual OCD, for example, the sufferer becomes obsessed with fear that they are homosexual or might have homosexual thoughts despite not actually being gay; people with incest OCD become paranoid that they might have incestuous thoughts, etc. Notice that in order to be defined as OCD, the sufferers have to not actually be gay or interested in sex with their relatives–this is paranoia about a non-existent transgression. Scott also notes that homosexual OCD is less common among people who don’t think of homosexuality as a sin, but these folks have other paranoias instead.

The “angel” in this metaphor is the selection process by which the brain decides which thoughts, out of the thousands we have each day, to focus on and amplify; “Guf” is the store of all available thoughts. Quoting Scott:

I studied under a professor who was an expert in these conditions. Her theory centered around the question of why angels would select some thoughts from the Guf over others to lift into consciousness. Variables like truth-value, relevance, and interestingness play important roles. But the exact balance depends on our mood. Anxiety is a global prior in favor of extracting fear-related thoughts from the Guf. Presumably everybody’s brain dedicates a neuron or two to thoughts like “a robber could break into my house right now and shoot me”. But most people’s Selecting Angels don’t find them worth bringing into the light of consciousness. Anxiety changes the angel’s orders: have a bias towards selecting thoughts that involve fearful situations and how to prepare for them. A person with an anxiety disorder, or a recent adrenaline injection, or whatever, will absolutely start thinking about robbers, even if they consciously know it’s an irrelevant concern.

In a few unlucky people with a lot of anxiety, the angel decides that a thought provoking any strong emotion is sufficient reason to raise the thought to consciousness. Now the Gay OCD trap is sprung. One day the angel randomly scoops up the thought “I am gay” and hands it to the patient’s consciousness. The patient notices the thought “I am gay”, and falsely interprets it as evidence that they’re actually gay, causing fear and disgust and self-doubt. The angel notices this thought produced a lot of emotion and occupied consciousness for a long time – a success! That was such a good choice of thought! It must have been so relevant! It decides to stick with this strategy of using the “I am gay” thought from now on. …
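Mechanically, what Scott describes is a reinforcement feedback loop, and a toy simulation makes the runaway dynamic easy to see. The sketch below is mine, not his: the thought names, the “emotional charge” numbers, and the multiplicative update rule are all invented for illustration.

```python
import random

# Toy model of the "selecting angel": thoughts are drawn in proportion
# to a weight, and any thought that provokes strong emotion gets its
# weight reinforced. All names and numbers here are made up.

weights = {"groceries": 1.0, "robbers": 1.0, "I am gay": 1.0}
emotional_charge = {"groceries": 0.1, "robbers": 0.5, "I am gay": 0.9}

random.seed(42)  # for reproducibility only

for _ in range(200):
    # The angel scoops a thought out of the Guf, biased by current weights.
    thought = random.choices(list(weights), weights=list(weights.values()))[0]
    # The angel's mistake: emotion and rumination are read as "relevance,"
    # so the more upsetting the thought, the more it is favored next time.
    weights[thought] *= 1 + emotional_charge[thought]

# After a few hundred rounds, an emotionally charged thought has crowded
# out the mundane one, even though all three started with equal weight.
print(sorted(weights.items(), key=lambda kv: -kv[1]))
```

Which charged thought ends up dominating depends on early luck, but the mundane thought essentially never wins: once a thought has provoked strong emotion a few times, its selection probability snowballs. That runaway is the trap being described.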

Politics has largely replaced religion as the arena where most people locate “sin,” and modern memetic structures seem extremely well designed to amplify political sin-based paranoia: articles like “Is your dog’s Halloween costume racist?” rake in profitable clicks and are shared widely across social media, whether by fans or opponents of the article.

Both religions and political systems have an interest in promoting such concerns, since they also sell the cures–forgiveness and salvation for the religious; economic and social policies for the political. This works best if it targets a very common subset of thoughts, like sexual attraction or dislike of random strangers, because you really can’t prevent all such thoughts, no matter how hard you try.

The original Tiny House: medieval illustration of an anchorite’s cell

Personal OCD is bad enough; a religious sufferer obsessed with their own sinfulness may feel compelled to retreat to a monastery or wall themselves up to avoid temptation. If a whole society becomes obsessed, though, the result may be widespread paranoia and social control. (Society can probably be modeled as a meta-brain.)

I propose that our society, due to its memetic structure, is undergoing OCD-inducing paranoia spirals where the voices of the most paranoid are being allowed to set political and moral directions. Using racism as an example, it works something like this:

First, we have what I’ll call the Aristotelian Mean State: an appropriate, healthy level of in-group preference that people would not normally call “racism.” This Mean State is characterized by liking and appreciating one’s own culture, generally preferring it to others, but admitting that your culture isn’t perfect and other cultures have good points, too.

Deviating too far from this mean is generally considered sinful–in one direction, we get “My culture is the best and all other cultures should die,” and too far in the other, “All other cultures are best and my culture should die.” One of these is called “racism,” the other “treason.”

When people get Racism OCD, they become paranoid that even innocuous or innocent things–like dog costumes–could be a sign of racism. In this state, people worry about even normal, healthy expressions of ethnic pride, just as a person with homosexual OCD worries about completely normal appreciation of athleticism or admiration of a friend’s accomplishments.

Our culture then amplifies such worries by channeling them through Tumblr and other social media platforms, where the argument “What do you mean you’re not against racism?” does wonders to break down resistance and convince everyone that normal, healthy ethnic feelings are abnormal, pathological racism: sin is everywhere; you must constantly interrogate yourself for sin; you must constantly learn and try harder not to be racist. There is always some new area of life that a Tumblrista can discover is secretly sinful, though you never realized it before, spiraling people into new arenas of self-doubt and paranoia.

As for the rest of the internet, those not predisposed toward Racism OCD are probably predisposed toward Anti-Racism OCD. Just as people with Racism OCD see racism everywhere, folks with Anti-Racism OCD see anti-racism everywhere. These folks think that even a normal, healthy reluctance to massacre the outgroup is pathological treason. (This is probably synonymous with Treason OCD, but is currently amplified by the perception that anti-racists are everywhere.)

Since there are over 300 million people in the US alone–not to mention 7 billion in the world–you can always find some case to justify paranoia. You can find people who say they merely have a healthy appreciation for their own culture but really do have murderous attitudes toward the out-group–something the out-group, at least, has good reason to worry about. You can find people who say they have a healthy attitude toward their own group, but still act in ways that could get everyone killed. You can find explicit racists and explicit traitors, and you can find lots of people with amplified, paranoid fears of both.

These two paranoid groups, in turn, feed off each other, each pointing at the other and screaming that everyone trying to promote “moderatism” is actually one of the other side’s worst sinners in disguise, and that moderatism itself is therefore evil. This feedback loop gives us things like the “It’s okay to be white” posters, which manage to make an entirely innocuous statement sound controversial, thanks to our conviction that people only make innocuous statements in order to make the other guy sound like a paranoid jerk who disputes innocuous statements.

Racism isn’t the only sin devolving into OCD–we can also propose Rape OCD, in which people become paranoid about behaviors like flirting, kissing, or even thinking about women. There are probably others (Trans OCD? Food Contamination OCD?), but these are the big ones coming to mind right now.

Thankfully, Scott also proposes that awareness of our own psychology may allow us to recognize and moderate ourselves:

All of these can be treated with the same medications that treat normal OCD. But there’s an additional important step of explaining exactly this theory to the patient, so that they know that not only are they not gay/a pedophile/racist, but it’s actually their strong commitment to being against homosexuality/pedophilia/racism which is making them have these thoughts. This makes the thoughts provoke less strong emotion and can itself help reduce the frequency of obsessions. Even if it doesn’t do that, it’s at least comforting for most people.

The question, then, is how do we stop our national neuroses from causing disasters?

What IS “Social Studies”?

Sometimes you can’t see the forest for the trees, and sometimes you look at your own discipline and can’t articulate what, exactly, the point of it is.

Yes, I know which topics social studies covers. History, civics, geography, world cultures, reading maps, traffic/pedestrian laws, etc. Socialstudies.org explains, “Within the school program, social studies provides coordinated, systematic study drawing upon such disciplines as anthropology, archaeology, economics, geography, history, law, philosophy, political science, psychology, religion, and sociology, as well as appropriate content from the humanities, mathematics…” etc. (I’m sure you did a lot of archaeology back in elementary school.)

But what is the point of lumping all of these things together? Why put psychology, geography, and law into the same book, and how on earth is that coordinated or systematic?

The points of some other school subjects are obvious. Reading and writing allow you to decode and encode information, a process that has massively expanded the human ability to learn and “remember” things by freeing us from the physical constraints of our personal memories. We can learn from men who lived a thousand years ago or a thousand miles away, and add our bit to the Great Conversation that stretches back to Homer and Moses.

Math allows us to quantify and measure the world, from “How much do I owe the IRS this year?” to “Will this rocket land on the moon?” (It is also, like fiction, pleasurable for its own sake.) And science and engineering, of course, allow us to make and apply factual observations about the real world–everything from “Rocks accelerate toward the earth at 9.8 m/s^2” to “This bridge is going to collapse.”

But what is social studies? The question bugged me for months, until Napoleon Chagnon–or more accurately, the Yanomamo–provided an answer.

Chagnon is an anthropologist who carefully documented Yanomamo homicide and birth rates, and found that the Yanomamo men who had killed the most people went on to father the most children–providing evidence that natural selection pressures were making the Yanomamo more violent and homicidal over time, and busting the “primitive peoples are all lovely egalitarians with no crime or murder” myth.

In an interview I read recently, Chagnon was asked what the Yanomamo made of him, this random white guy who came to live in their village. Why was he there? Chagnon replied that they thought he had come:

“To learn how to be human.”

Sometimes we anthropologists lose the signal in the noise. We think our purpose is to document primitive tribes before they go extinct (and that is part of our purpose.) But the Yanomamo are correct: the real reason we come is to learn how to be human.

All of school has one purpose: to prepare the child for adulthood.

The point of social studies is to prepare the child for full, adult membership in their society. You must learn the norms, morals, and laws of your society, along with its history and geography. You learn not just how a bill becomes a law, but why a bill becomes a law. If you are religious, your child will also learn about the history and moral teachings of their religion.

Most religions have some kind of ceremony that marks the beginning of religious adulthood. For example, many churches practice the rite of Confirmation, in which teens reaffirm their commitment to Christ and become full members of the congregation. Adult Baptism functions similarly in some denominations.

Judaism has the Bar (and Bat) Mitzvah, whose implications are quite clearly articulated. When a child turns 13 (or in some cases, 12), they are now expected to be moral actors, responsible for their own behavior. They now make their own decisions about following Jewish law, religious duties, and morality.

But there’s an upside: the teen is also now able to be part of a minyan, the 10-person group required for certain Jewish prayers and Torah legal study; can marry*; and can testify before a rabbinic court.

*Local laws still apply.

In short, the ceremony marks the child’s entry into the world of adults and full membership in their society. (Note: obviously 13 yr olds are not treated identically to 33 yr olds; there are other ceremonies that mark the path to maturity.)

Whatever your personal beliefs, the point of Social Studies is to prepare your child for full membership in society.

A society is not merely an aggregation of people who happen to live near each other and observe the same traffic laws (though that is important.) It is a coherent group that believes in itself, has a common culture, language, history, and even literature (often going back thousands of years) about its heroes, philosophy, and values.

To be part of society is to be part of that Great Conversation I referenced above.

But what exactly society is–and who is included in it–is a hotly debated question. Is America the Land of the Free and Home of the Brave, or is it a deeply racist society built on slavery and genocide? As America’s citizens become more diverse, how do these newcomers fit into society? Should we expand the canon of Great Books to reflect our more diverse population? (If you’re not American, just substitute your own country.)

These debates can make finding good Social Studies resources tricky. Young students should not be lied to about their ancestors, but neither should they be subjected to a depressing litany of their ancestors’ sins. You cannot become a functional, contributing member of a society you’ve been taught to hate or be ashamed of.

Too often, I think, students are treated to a lop-sided curriculum in which their ancestors’ good deeds are held up as “universal” accomplishments while their sins are blamed on the group as a whole. The result is a notion that they “have no culture” or that their people have done nothing good for humanity and should be stricken from the Earth.

This is not how healthy societies socialize their children.

If you are using a pre-packaged curriculum, it should be reasonably easy to check whether the makers hold values similar to your own. If you use a more free-form method (like I do), it gets harder. For example, YouTube* is a great source for educational videos about all sorts of topics–math, grammar, exoplanets, etc.–so I tried looking up videos on American history. Some were good–and some were bad.

*Use sensible supervision

For example, here’s a video that looked good on the thumbnail, but turned out quite bad:

From the description:

In which John Green teaches you about the Wild, Wild, West, which as it turns out, wasn’t as wild as it seemed in the movies. When we think of the western expansion of the United States in the 19th century, we’re conditioned to imagine the loner. The self-reliant, unattached cowpoke roaming the prairie in search of wandering calves, or the half-addled prospector who has broken from reality thanks to the solitude of his single-minded quest for gold dust. While there may be a grain of truth to these classic Hollywood stereotypes, it isn’t a very big grain of truth. Many of the pioneers who settled the west were family groups. Many were immigrants. Many were major corporations. The big losers in the westward migration were Native Americans, who were killed or moved onto reservations. Not cool, American pioneers.

Let’s work through this line by line. What is the author’s first priority: teaching you something new about the West, or telling you that the things you believe are wrong?

Do you think it would be a good idea to start a math lesson by proclaiming, “Hey kids, I bet you get a lot of math problems wrong”? No. Don’t start a social studies lesson that way, either.

There is no good reason to spend valuable time bringing up incorrect ideas simply because a child might hold them; you should always try to impart correct information and dispel incorrect ideas if the child actually holds them. Otherwise the child is left not with a foundation of solid knowledge, but with what they thought they knew in tatters, with very little to replace it.

Second, is the Western movie genre really so prominent these days that we must combat the pernicious lies of John Wayne and the Lone Ranger? I don’t know about you, but I worry more about my kids picking up myths from Pokemon than from a genre whose popularity dropped off a cliff sometime back in the 80s.

“We are conditioned to think of the loner.” Conditioned. Yes, this man thinks that you have been trained like a dog to salivate at the ringing of a Western-themed bell, the word “loner” popping into your head. The inclusion of random psychology terms where they don’t belong is pseudo-intellectual garbage.

Updated values chart!

The idea of the “loner” cowboy and prospector, even in their mythologized form, is closer to the reality than the picture he draws. On the scale of nations, the US is actually one of the world’s most individualist, currently outranked only by Canada, the Netherlands, and Sweden.

Without individualism, you don’t get the notion of private property. In many non-Western societies, land, herds, and other wealth are held collectively by the family or clan, making it nearly impossible for one person (or nuclear family) to cash out his share, buy a wagon, and head West.

I have been reading Horace Kephart’s Our Southern Highlanders, an ethnography of rural Appalachia published in 1913. Here is a bit from the introduction:

The Southern highlands themselves are a mysterious realm. When I prepared, eight years ago, for my first sojourn in the Great Smoky Mountains, which form the master chain of the Appalachian system, I could find in no library a guide to that region. The most diligent research failed to discover so much as a magazine article, written within this generation, that described the land and its people. Nay, there was not even a novel or a story that showed intimate local knowledge. Had I been going to Teneriffe or Timbuctu, the libraries would have furnished information a-plenty; but about this housetop of eastern America they were strangely silent; it was terra incognita.

On the map I could see that the Southern Appalachians cover an area much larger than New England, and that they are nearer the center of our population than any other mountains that deserve the name. Why, then, so little known? …

The Alps and the Rockies, the Pyrennees and the Harz are more familiar to the American people, in print and picture, if not by actual visit, than are the Black, the Balsam, and the Great Smoky Mountains. …For, mark you, nine-tenths of the Appalachian population are a sequestered folk. The typical, the average mountain man prefers his native hills and his primitive ancient ways. …

The mountaineers of the South are marked apart from all other folks by dialect, by customs, by character, by self-conscious isolation. So true is this that they call all outsiders “furriners.” It matters not whether your descent be from Puritan or Cavalier, whether you come from Boston or Chicago, Savannah or New Orleans, in the mountains you are a “furriner.” A traveler, puzzled and scandalized at this, asked a native of the Cumberlands what he would call a “Dutchman or a Dago.” The fellow studied a bit and then replied: “Them’s the outlandish.” …

As a foretaste, in the three and a half miles crossing Little House and Big House mountains, one ascends 2,200 feet, descends 1,400, climbs again 1,600, and goes down 2,000 feet on the far side. Beyond lie steep and narrow ridges athwart the way, paralleling each other like waves at sea. Ten distinct mountain chains are scaled and descended in the next forty miles. …

The only roads follow the beds of tortuous and rock-strewn water courses, which may be nearly dry when you start out in the morning, but within an hour may be raging torrents. There are no bridges. One may ford a dozen times in a mile. A spring “tide” will stop all travel, even from neighbor to neighbor, for a day or two at a time. Buggies and carriages are unheard of. In many districts the only means of transportation is with saddlebags on horseback, or with a “tow sack” afoot. If the pedestrian tries a short-cut he will learn what the natives mean when they say: “Goin’ up, you can might’ nigh stand up straight and bite the ground; goin’ down, a man wants hobnails in the seat of his pants.” …

Such difficulties of intercommunication are enough to explain the isolation of the mountaineers. In the more remote regions this loneliness reaches a degree almost unbelievable. Miss Ellen Semple, in a fine monograph published in the Geographical Journal, of London, in 1901, gave us some examples:

“These Kentucky mountaineers are not only cut off from the outside world, but they are separated from each other. Each is confined to his own locality, and finds his little world within a radius of a few miles from his cabin. There are many men in these mountains who have never seen a town, or even the poor village that constitutes their county-seat…. The women … are almost as rooted as the trees. We met one woman who, during the twelve years of her married life, had lived only ten miles across the mountain from her own home, but had never in this time been back home to visit her father and mother. Another back in Perry county told me she had never been farther from home than Hazard, the county-seat, which is only six miles distant. Another had never been to the post-office, four miles away; and another had never seen the ford of the Rockcastle River, only two miles from her home, and marked, moreover, by the country store of the district.”

When I first went into the Smokies, I stopped one night in a single-room log cabin, and soon had the good people absorbed in my tales of travel beyond the seas. Finally the housewife said to me, with pathetic resignation: “Bushnell’s the furdest ever I’ve been.” Bushnell, at that time, was a hamlet of thirty people, only seven miles from where we sat. When I lived alone on “the Little Fork of Sugar Fork of Hazel Creek,” there were women in the neighborhood, young and old, who had never seen a railroad, and men who had never boarded a train, although the Murphy branch ran within sixteen miles of our post-office.

And that’s just Appalachia. What sorts of men and women do you think settled the Rockies or headed to the Yukon? Big, gregarious families that valued their connections to society at large?

Then there are the railroads. The video makes a big deal about the railroads being funded by the government, as proof that Americans weren’t “individuals” but part of some grand collectivist society.

Over in reality, societies with more collectivist values, like Pakistan, don’t undertake big national projects. In those societies, your loyalty is to your clan or kin group, and the operative level of social planning and policy is the clan. Big projects that benefit lots of people, not just particular kin networks, tend not to get funded because people do not see themselves as individuals acting within a larger nation that can do big projects that benefit individual people. Big infrastructure projects, especially in the 1800s, were almost entirely limited to societies with highly individualistic values.

Finally we have the genocide of the American Indians. Yes, some were definitely killed; the past is full of sins. But “You’re wrong, your self-image is wrong, and your ancestors were murderers,” is not a good way to introduce the topic.

It’s a pity the video was no good, because the animation was well done. It turns out that people hold far more strident opinions on “Was Westward Expansion Just?” than on “Is Pi Irrational?”

I also watched the first episode of Netflix’s new series, The Who Was? Show, based on the popular line of children’s biographies. It was an atrocity, and not just because of the fart jokes. The episode paired Benjamin Franklin and Mahatma Gandhi. Gandhi was depicted respectfully, and as the frequent victim of British racism. Franklin was depicted as a buffoon who hogged the spotlight and tried to steal or take credit for other people’s ideas.

It made me regret buying a biography of Marie Curie last week.

If your children are too young to read first-hand ethnographic accounts of Appalachia and the frontier, what do I recommend instead? Of course there are thousands of quality books out there, and more published every day, but here are a few:

A Child’s Introduction to The World

The Usborne Book of Living Long Ago: Everyday Life Through the Ages

What Your [X] Grader Needs to Know (so far I like these, but I have not read them all the way through)

DK: When on Earth?

More important than individual resources, though, is the attitude you bring to the subject.

 

Before we finish, I’d like to note that “America” isn’t actually the society I feel the closest connection to. After all, there are a lot of people here whom I don’t like. The government has a habit of sending loyal citizens to die in stupid wars and denying them medical treatment when they return, and I don’t even know if the country will still exist in meaningful form in 30 years. I think of my society as more “Civilization,” or specifically, “people engaged in the advancement of knowledge.”

Anthropology Friday: Japan pt 3

 

Welcome back to Anthropology Friday. Today we are continuing with Sidney L. Gulick’s Evolution of the Japanese, Social and Psychic, published in 1903. Gulick was a Puritan missionary who moved to Japan shortly after the “opening of Japan” and the Meiji Restoration. He wrote at a time when Japanese society was changing at break-neck speed and very few accounts of Japan existed in the West. (As usual, quotes will be in “” instead of blocks.)

Cheerfulness

“Many writers have dwelt with delight on the cheerful disposition that seems so common in Japan. Lightness of heart, freedom from all anxiety for the future, living chiefly in the present, these and kindred features are pictured in glowing terms. And, on the whole, these pictures are true to life. The many flower festivals are made occasions for family picnics when all care seems thrown to the wind. There is a simplicity and a freshness and a freedom from worry that is delightful to see. But it is also remarked that a change in this regard is beginning to be observed. The coming in of Western machinery, methods of government, of trade and of education, is introducing customs and cares, ambitions and activities, that militate against the older ways. Doubtless, this too is true. If so, it but serves to establish the general proposition of these pages that the more outstanding national characteristics are largely the result of special social conditions, rather than of inherent national character. …

“Yet the Japanese are by no means given up to a cheerful view of life. Many an individual is morose and dejected in the extreme. This disposition is ever stimulated by the religious teachings of Buddhism. Its great message has been the evanescent character of the present life. Life is not worth living, it urges; though life may have some pleasures, the total result is disappointment and sorrow. Buddhism has found a warm welcome in the hearts of many Japanese. For more than a thousand years it has been exercising a potent influence on their thoughts and lives. Yet how is this consistent with the cheerful disposition which seems so characteristic of Japan? The answer is not far to seek. Pessimism is by its very nature separative, isolating, silent. Those oppressed by it do not enter into public joys. They hide themselves in monasteries, or in the home. The result is that by its very nature the actual pessimism of Japan is not a conspicuous feature of national character.

“The judgment that all Japanese are cheerful rests on shallow grounds. Because, forsooth, millions on holidays bear that appearance, and because on ordinary occasions the average man and woman seem cheerful and happy, the conclusion is reached that all are so. No effort is made to learn of those whose lives are spent in sadness and isolation. I am convinced that the Japan of old, for all its apparent cheer, had likewise its side of deep tragedy. Conditions of life that struck down countless individuals, and mental conditions which made Buddhism so popular, both point to this conclusion.”

EvX: See: Hikikomori:

In Japan, hikikomori (Japanese: ひきこもり or 引き籠り, lit. “pulling inward, being confined”, i.e., “acute social withdrawal”) are reclusive adolescents or adults who withdraw from social life, often seeking extreme degrees of isolation and confinement. Hikikomori refers to both the phenomenon in general and the recluses themselves. Hikikomori have been described as loners or “modern-day hermits”.[1] Estimates reveal that nearly half a million Japanese youth have become social recluses.[2]…

According to government figures released in 2010, there are 700,000 individuals living as hikikomori with an average age of 31.[10] Still, the numbers vary widely among experts. These include the hikikomori who are now in their 40s and have spent 20 years in isolation. This group is generally referred to as the “first-generation hikikomori.” There is concern about their reintegration into society in what is known as “the 2030 Problem,” when they are in their 60s and their parents begin to die.[10] Additionally, the government estimates that 1.55 million people are on the verge of becoming hikikomori.[10] Tamaki Saitō, who first coined the phrase, originally estimated that there may be over one million hikikomori in Japan, although this was not based on national survey data. Nonetheless, considering that hikikomori adolescents are hidden away and their parents are often reluctant to talk about the problem, it is extremely difficult to gauge the number accurately.[11]

I suspect this is becoming a problem in the West, too. But back to Gulick:

Work Ethic

“The Japanese give the double impression of being industrious and diligent on the one hand, and, on the other, of being lazy and utterly indifferent to the lapse of time. The long hours during which they keep at work is a constant wonder to the Occidental. I have often been amazed in Fukuoka to find stores and workshops open, apparently in operation, after ten and sometimes even until eleven o’clock at night, while blacksmiths and carpenters and wheelwrights would be working away as if it were morning. Many of the factories recently started keep very long hours. Indeed most of the cotton mills run day and night, having two sets of workers, who shift their times of labor every week. Those who work during the night hours one week take the day hours the following week. In at least one such factory, with which I am acquainted, the fifteen hundred girls who work from six o’clock Saturday evening until six o’clock Sunday morning, are then supposed to have twenty-four hours of rest before they begin their day’s work Monday morning; but, as a matter of fact, they must spend three or four and sometimes five hours on Sunday morning cleaning up the factory. …

“But there are equally striking illustrations of an opposite nature. The farmers and mechanics and carpenters, among regular laborers, and the entire life of the common people in their homes, give an impression of indifference to the flight of time, if not of absolute laziness. The workers seem ready to sit down for a smoke and a chat at any hour of the day. In the home and in ordinary social life, the loss of time seems to be a matter of no consequence whatever. Polite palaver takes unstinted hours, and the sauntering of the people through the street emphasizes the impression that no business calls oppress them.”

EvX: This seems like an apt time to plug The Birth of Sake, a documentary available on Netflix.

Trust, Suspicion, and Change

“Two other strongly contrasted traits are found in the Japanese character, absolute confidence and trustfulness on the one hand, and suspicion on the other. It is the universal testimony that the former characteristic is rapidly passing away; in the cities it is well-nigh gone. But in the country places it is still common. The idea of making a bargain when two persons entered upon some particular piece of work, the one as employer, the other as employed, was entirely repugnant to the older generation, since it was assumed that their relations as inferior and superior should determine their financial relations; the superior would do what was right, and the inferior should accept what the superior might give without a question or a murmur. Among the samurai, where the arrangement is between equals, bargaining or making fixed and fast terms which will hold to the end, and which may be carried to the courts in case of differences, was a thing practically unknown in the older civilization. Everything of a business nature was left to honor, and was carried on in mutual confidence.

“A few illustrations of this spirit of confidence from my own experience may not be without interest. On first coming to Japan, I found it usual for a Japanese who wished to take a jinrikisha to call the runner and take the ride without making any bargain, giving him at the end what seemed right. And the men generally accepted the payment without question. I have found that recently, unless there is some definite understanding arrived at before the ride, there is apt to be some disagreement, the runner presuming on the hold he has, by virtue of work done, to get more than is customary. This is especially true in case the rider is a foreigner. Another set of examples in which astonishing simplicity and confidence were manifested was in the employment of evangelists. I have known several instances in which a full correspondence with an evangelist with regard to his employment was carried on, and the settlement finally concluded, and the man set to work without a word said about money matters. It need hardly be said that no foreigner took part in that correspondence. …

“This confidence and trustfulness were the product of a civilization resting on communalistic feudalism; the people were kept as children in dependence on their feudal lord; they had to accept what he said and did; they were accustomed to that order of things from the beginning and had no other thought; on the whole too, without doubt, they received regular and kindly treatment. Furthermore, there was no redress for the peasant in case of harshness; it was always the wise policy, therefore, for him to accept whatever was given without even the appearance of dissatisfaction. This spirit was connected with the dominance of the military class. Simple trustfulness was, therefore, chiefly that of the non-military classes. The trustfulness of the samurai sprang from their distinctive training. As already mentioned, when drawing up a bond in feudal times, in place of any tangible security, the document would read, “If I fail to do so and so, you may laugh at me in public.”

“Since the overthrow of communal feudalism and the establishment of an individualistic social order, necessitating personal ownership of property, and the universal use of money, trustful confidence is rapidly passing away. Everything is being more and more accurately reduced to a money basis. The old samurai scorn for money seems to be wholly gone, an astonishing transformation of character. Since the disestablishment of the samurai class many of them have gone into business. Not a few have made tremendous failures for lack of business instinct, being easily fleeced by more cunning and less honorable fellows who have played the “confidence” game most successfully; others have made equally great successes because of their superior mental ability and education. The government of Japan is to-day chiefly in the hands of the descendants of the samurai class. …

“Passing now from the character of trustful confidence, we take up its opposite, suspiciousness. The development of this quality is a natural result of a military feudalism such as ruled Japan for hundreds of years. Intrigue was in constant use when actual war was not being waged. In an age when conflicts were always hand to hand, and the man who could best deceive his enemy as to his next blow was the one to carry off his head, the development of suspicion, strategy, and deceit was inevitable. The most suspicious men, other things being equal, would be the victors; they, with their families, would survive and thus determine the nature of the social order. The more than two hundred and fifty clans and “kuni,” “clan territory,” into which the land was divided, kept up perpetual training in the arts of intrigue and subtlety which are inevitably accompanied by suspicion.”

EvX: You can almost hear the HBD argument being made…

“Modern manifestations of this characteristic are frequent. Not a cabinet is formed, but the question of its make-up is discussed from the clannish standpoint. Even though it is now thirty years since the centralizing policy was entered upon and clan distinctions were effectually broken down, yet clan suspicion and jealousy is not dead.”

Politeness

“The foreigner is impressed by the constant need of care in conversation, lest he be thought to mean something more or other than he says. When we have occasion to criticise anything in the Japanese, we have found by experience that much more is inferred than is said. Shortly after my arrival in Japan I was advised by one who had been in the land many years to be careful in correcting a domestic or any other person sustaining any relation to myself, to say not more than one-tenth of what I meant, for the other nine-tenths would be inferred. Direct and perfectly frank criticism and suggestion, such as prevail among Anglo-Americans at least, seem to be rare among the Japanese.”

EvX: This, I gather, is still true.

That’s enough for now. See you next Friday.

Re: Eurozine’s How to Change Human History

Some of you have asked for my opinions on David Graeber and David Wengrow’s recently published article, “How to change the course of human history (at least, the part that’s already happened)”:

The story we have been telling ourselves about our origins is wrong, and perpetuates the idea of inevitable social inequality. David Graeber and David Wengrow ask why the myth of ‘agricultural revolution’ remains so persistent, and argue that there is a whole lot more we can learn from our ancestors.

The article is long and difficult to excerpt, so I’m going to summarize:

The traditional tale of how our idyllic, peaceful, egalitarian, small-group hunter-gatherer past gave way to our warlike, sexist, racist, violent, large-city agrarian present gives people the impression that hierarchy and violence are inevitable parts of our economic system. However, the traditional tale is wrong–the past was actually a lot more complicated than you’ve been told. Therefore, there is no historical pattern and the real source of all bad things is actually the family.

The final conclusion is pulled out of nowhere:

Egalitarian cities, even regional confederacies, are historically quite commonplace. Egalitarian families and households are not. Once the historical verdict is in, we will see that the most painful loss of human freedoms began at the small scale – the level of gender relations, age groups, and domestic servitude – the kind of relationships that contain at once the greatest intimacy and the deepest forms of structural violence. If we really want to understand how it first became acceptable for some to turn wealth into power, and for others to end up being told their needs and lives don’t count, it is here that we should look. Here too, we predict, is where the most difficult work of creating a free society will have to take place.

Since “inequality begins in the family” is supported nowhere in the text, we will ignore it.

1. What about the “traditional narrative”? Did hunter-gatherers live in small, peaceful, egalitarian, idyllic communities? Or are the Davids correct that this is a myth?

It’s a myth. Mostly.

While we have almost no information about people’s opinions on anything before the advent of writing, there’s no evidence from any hunter-gatherer society we have actually been able to observe that hunter-gathering leads naturally to egalitarianism or peacefulness.

For example, among the Inuit (Eskimo), hunter-gatherers of the arctic, polyandry (the marriage of one woman to multiple men) didn’t exist because they had particularly enlightened views about women and marriage, but because they had a habit of killing female babies. Too much female infanticide => not enough adult women to go around => men making do.

Why do some groups have high rates of female infanticide? Among other reasons, because in the Arctic, the men do the hunting (seal, fish, caribou, etc.) and the women gather… not a whole lot. (Note: I’m pretty sure the modern Inuit do not practice sex-selective infanticide.)

Polyandry can also be caused by polygyny and by simple lack of resources–men who cannot afford to support a wife and raise their own children may content themselves with sharing a wife and contributing what they can to the raising of offspring who might be theirs.

I have yet to encounter in all of my reading any hunter-gatherer or “primitive” society that has anything like our notion of “gender equality” in which women participate equally in the hunting and men do 50% of the child-rearing and gathering, (though some Pygmies are reported to be excellent fathers.) There are simple physical limits here: first, hunter-gatherers don’t have baby formula and men don’t lactate, so the duties of caring for small children fall heavily on their mothers. Many hunter-gatherers don’t even have good weaning foods, and so nurse their children for years longer than most Westerners. Second, hunting tends to require great physical strength, both in killing the animals (stronger arms will get better and more accurate draws on bows and spears) and in hauling the kills back to the tribe (you try carrying a caribou.)

In many horticultural societies, women do a large share of the physical labor of building houses and producing food, but the men do not make up for this by tending the babies. A similar division of labor exists in modern, lower-class African American society, where the women provide for their families and raise the children and the men are largely absent. Modern Rwanda, which suffers a dearth of men due to war and mass genocide, also has a “highly equitable” division of labor; not exactly an egalitarian paradise.

Hunter-gatherers, horticulturalists, and other folks living outside formal states have very high rates of violence. The Yanomami/o, for example (who combine horticulture with hunting and foraging), are famous for their extremely high rates of murder and constant warfare. The Aborigines of Australia, when first encountered by outsiders, also had very high rates of interpersonal violence and warfare.

Graph from the Wikipedia
See also my post, “No, Hunter Gatherers were not Peaceful Paragons of Gender Egalitarianism.”

The Jivaro are an Amazonian group similar to the Yanomamo; the Mae Enga, Dugum Dani, Huli, and Gebusi are horticulturalists/hunters from PNG; Murngin are Australian hunter-gatherers.

I know, I know, horticulturalists are not pure hunter-gatherers, even if they do a lot of hunting and gathering. As we’ll discuss below, the transition from hunter-gathering to agriculture is complicated and these are groups that we might describe as “in between”. The real question isn’t whether they bury a few coconuts if they happen to sprout before getting eaten, but whether they have developed large-scale social organization, cities, and/or formal states.

The article protests against using data from any contemporary forager societies, because they are by definition not ancient hunter-gatherers and have been contaminated by contact with non-foraging neighbors (I propose, however, that the Australian Aborigines were pretty uncontaminated at first contact)–but then the article goes on to use data from contemporary forager societies to bolster its own points, so I feel perfectly entitled to do the same thing.

However, we do have some data on ancient violence, eg:

According to this article, 12-14% of skeletons from most (but not all) ancient, pre-agricultural hunter-gatherer groups show signs of violence. Here’s a case of a band of hunter-gatherers–including 6 small children–who were slaughtered by another band of hunter-gatherers 10,000 years ago.

Warfare appears to have been part of the human experience as far back as we look–even chimps wage war against each other, as Jane Goodall documented in her work in the Gombe.

Then there’s the cannibalism. Fijians, for example, who practiced a mixed horticulture/hunter-gathering lifestyle (fishing is a form of hunting that looks a lot like gathering), were notorious cannibals when first encountered by outsiders. (Though they did have something resembling a state at the time.)

Neanderthals butchered each other; 14,700 years ago, hunter-gatherers were butchering and eating each other in Cheddar Gorge, England. (This is the same Cheddar Gorge as the famous Cheddar Man hails from, but CM is 5,000 years younger than these cannibals and probably no relation, as an intervening glacier had forced everyone out of the area for a while. CM also died a violent death, though.)

Or as reported by Real Anthropology:

Increasing amount of archaeological evidence, such as fortifications of territories and pits containing dead humans blown by axes, indicates that warfare originated from prehistoric times, long before the establishment of state societies. Recently, researchers studying the animal bones in Mesolithic layer of Coves de Santa Maira accidentally discovered thirty human bone remains of the pre-Neolithic hunter-gatherer with anthropic marks, indicating behaviors of human cannibalism.

The article would like to emphasize, however, that we don’t really know why these people engaged in cannibalism. Starvation? Funeral rituals? Dismemberment of an enemy they really hated? Like I said, it’s hard to know what people were really thinking without written records.

There was a while in anthropology/archaeology when people were arguing that the spread of pots didn’t necessarily involve the spread of people, as a new pottery style could just spread because people liked it and decided to adopt it; it turns out that sometimes the spread is indeed of pots, and sometimes it’s of people. Similarly, certain anthropologists took to describing hunter-gatherers as “harmless“, but this didn’t involve any actual analysis of violence rates among hunter-gatherers (yes, I’ve read the book.)

In sum: The narrative that our ancestors were peaceful egalitarians is, in most cases, probably nonsense.

2. The Davids also argue that the transition from hunter-gathering to agriculture was more complex than the “traditional narrative” claims.

This is also true. As we’ve already touched on above, there are many economic systems that fall somewhere in between exclusive hunter-gathering and pure agriculture. Nomadic hunters who followed and exploited herds of animals gradually began protecting them from other predators (like wolves) and guiding the animals to areas with food and shelter. The domestication of goats pre-dates the beginning of agriculture (and dogs pre-date goats); the domestication of reindeer was much more recent (I reviewed a book on reindeer economies here, here, here, and here). Again, there is no absolute line between hunters like the Eskimo, who annually exploit migrating wild caribou, and Lapp (Sami) ranchers, who occasionally round up their herds of “domestic” reindeer. The reindeer appreciate that we humans kill off their natural predators (ie wolves) and provide a source of valuable salts (ie urine). The origin of domestic goats and sheep probably looked similar, though the domestication of cattle was probably a more conscious decision, given the bovines’ size.

The hunting of fish also looks a lot more like gathering or even farming, as a single resource area (eg, a bend in the river or a comfortable ocean bay) may be regularly exploited via nets, traps, rakes, weirs, etc.

Horticulture is a form of low-intensity agriculture (literally, gardening.) Some horticulturalists get most of their food from their gardens; others plant a few sprouted coconuts and otherwise get most of their food by hunting and fishing. Horticulture doesn’t require much technology (no plows needed) and typically doesn’t produce that many calories.

It is likely that many “hunter gatherers” understood the principle of “seeds sprout and turn into plants” and strategically planted seeds or left them in places where they wanted plants to grow for centuries or millennia before they began actively tending the resulting plants.

Many hunter-gatherer groups also practice active land management techniques. For example, a group of Melanesians in PNG that hunts crocodiles periodically burns the swamp in which the crocodiles live in order to prevent woody trees from taking over and making the swamp less swampy. By preserving the crocodiles’ habitat, they ensure there are plenty of crocodiles around for them to hunt. (I apologize for the lack of a link to a description of the group, but I saw it in a documentary about hunter-gatherers available on Netflix.)

Large-scale environment management probably also predates the adoption of formal agriculture by thousands of years.

Where the article goes wrong:

1. Just because something is more complicated than the “simplified” version you commonly hear doesn’t mean, “There is no pattern, all is unknowable, nihilism now.”

Any simplified version of things is, by definition, simplified.

The idea that hunter-gatherers were uniquely peaceful and egalitarian is nonsense; if anything, the opposite may be true. Once you leave behind your preconceptions, you realize that the pattern isn’t “random noise” but actually that all forms of violence and oppression appear to be decreasing over time. Economies where you can get ahead by murdering your neighbors and stealing their wives have been largely replaced by economies where murdering your neighbors lands you in prison and women go to college. There’s still noise in the data–times we humans kill a lot of each other–but that doesn’t mean there is no pattern.

2. Most hunter-gatherers did, in fact, spend most of their time in small communities.

The Davids make a big deal out of the fact that hunter-gatherers who exploit seasonally migrating herds sometimes gather in large groups in order to exploit those herds. They cite, for example:

Another example were the indigenous hunter-gatherers of Canada’s Northwest Coast, for whom winter – not summer – was the time when society crystallised into its most unequal form, and spectacularly so. Plank-built palaces sprang to life along the coastlines of British Columbia, with hereditary nobles holding court over commoners and slaves, and hosting the great banquets known as potlatch. Yet these aristocratic courts broke apart for the summer work of the fishing season, reverting to smaller clan formations, still ranked, but with an entirely different and less formal structure. In this case, people actually adopted different names in summer and winter, literally becoming someone else, depending on the time of year.

Aside from the fact that they are here citing a modern people as an argument about prehistoric ones (!), the Pacific Northwest is one of the world’s lushest environments, with an amazing natural abundance of huntable (fishable) food. If I had to pick somewhere to ride out the end of civilization, the PNW (and New Zealand) would be high on my list. The material abundance of the PNW is available almost nowhere else in the world–and wasn’t available to anyone before the First Nations arrived in the area around 13,000 years ago. Our stone-age ancestors 100,000 years ago in Africa certainly weren’t exploiting salmon in British Columbia.

Hunter-gatherers who exploit migrating resources sometimes get all of their year’s food in only 3 or 4 massive hunts. These hunts certainly can involve lots of people, as whole clans will want to work together to round up, kill, and process thousands of animals within the space of a few days.

Even the most massive of these gatherings, however, did not compare in size and scope to our modern cities. A few hundred Inuit might gather for the short Arctic summer before scattering back to their igloos; the Mongol capital of Ulan Bator was often described as nearly deserted, as the nomadic herdsmen had little reason to remain in the capital when court was not in session.

(Also, the Davids’ description of Inuit life is completely backwards from the actual anthropology I have read; I’m wondering if they accidentally mixed up the Yupik Eskimo, who don’t go by the term “Inuit,” with the Canadian Eskimo, who do go by “Inuit.” I have not read about the Yupik, but if their lifestyle differs from the Inuit’s, this would explain the confusion.)

The Davids also cite the behavior of the 19th century Plains Indians, but this is waaay disconnected from any “primitive” lifestyle. Most of the Plains Indians had formerly been farmers before disease, guns, and horses, brought by the Spaniards, disrupted their lives. Without horses (or plows) the great plains and their bison herds were difficult to exploit, and people preferred to live in towns along local riverbanks, growing corn, squash, and beans.

We might generously call these towns “cities,” but none of them were the size of modern cities.

  3. Production of material wealth

Hunter-gathering, horticulture, fishing, and herding–even at their best–do not produce that much extra wealth. They are basically subsistence strategies; most people in these societies are directly engaged in food production and so can’t spend their time producing other goods. Nomads, of course, have the additional constraint that they can’t carry much with them under any circumstances.

A society can only have as much hierarchy as it can support. A nomadic tribe can have one person who tells everyone when to pack up and move to the next pasture, but it won’t produce enough food to support an entire class of young adults who do things other than produce food.

By contrast, in our modern, industrial society, less than 2% of people are farmers/ranchers. The other 98% of us work in food processing of some sort, work in careers not related to food at all, or are unemployed.
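To put rough numbers on this (a back-of-the-envelope illustration of my own; the figures are invented, except the 98% above): if each food producer can feed F people, himself included, then 1/F of society must produce food and the remaining 1 − 1/F is free to do anything else.

```python
# Back-of-the-envelope arithmetic (illustrative numbers, not from any source):
# if each food producer feeds F people (himself included), then 1/F of
# society must produce food, and the remaining 1 - 1/F is free for
# hierarchy, crafts, armies, parking-lot construction, etc.
for label, feeds in [("forager", 1.1), ("early farmer", 1.5), ("modern farmer", 50.0)]:
    freed = 1 - 1 / feeds
    print(f"{label:13s} feeds {feeds:4.1f} people -> {freed:3.0%} of society freed from food production")
```

A forager who can feed only a tenth more than himself frees up about 9% of his society for other work; a modern farmer who feeds fifty frees up the 98% cited above.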

This is why our society can produce parking lots that are bigger and more complex than the most impressive buildings ever constructed by hunter-gatherers.

The fact that, on a few occasions, hunter-gatherers managed to construct large buildings (and Stonehenge was not built by hunter-gatherers but by farmers; the impressive, large stones of Stonehenge were not part of the original layout but were erected by a later wave of invaders who killed off 90% of Stonehenge’s original builders) does not mean the average hunter-gatherer lived in a complex society. They did not, because hunter-gathering could not support complex societies, massive building projects, or aristocracies most of the time.

It is only with the advent of agriculture that people started accumulating enough food that there was enough left over for any sort of formal, long-term state to start taxing. True, this doesn’t necessarily mean that agriculture has to result in formal states with taxes; it just means that it’s very hard to get states and taxes without agriculture. (The one exception is if a nomadic herding society like the Mongols conquers an agricultural state and takes its taxes.)

In sum, yes, the “traditional story” is wrong–but not completely. History was more complicated, violent, and unequal than portrayed, but the broad outline, from “smaller, simpler” hunter-gatherer societies to “bigger, more complex” agricultural societies, is basically correct. If anything, the lesson is that civilization has the potential to be a great force for good.

Maybe Terrorists are Actually Just Morons?

Gwern has a fascinating essay about terrorism, Terrorism-is-not-about-Terror:

There is a commonly-believed strategic model of terrorism which we could describe as follows: terrorists are people who are ideologically motivated to pursue specific unvarying political goals; to do so, they join together in long-lasting organizations and after the failure of ordinary political tactics, rationally decide to efficiently & competently engage in violent attacks on (usually) civilian targets to get as much attention as possible and publicity for their movement, and inspire fear & terror in the civilian population, which will pressure its leaders to solve the problem one way or another, providing support for the terrorists’ favored laws and/or their negotiations with involved governments, which then often succeed in gaining many of the original goals, and the organization dissolves.

Unfortunately, this model is, in almost every respect, empirically false.

It’s a great essay, so go read the whole thing before we continue. Don’t worry; I’ll wait.

Done?

Good.

Now, since I know half of you didn’t actually read the essay, I’ll summarize: terrorists are really bad at accomplishing their “objectives.” By any measure, they are really bad at it. Simply doing nothing would, in most cases, further their political goals more effectively.

This is in part because terrorists tend not to conquer and hold land, and in part because terrorism tends to piss off its targets, making them less likely to give in to the terrorists’ demands. Consider 9-11: sure, the buildings fell down, but did it result in America conceding to any of Al-Qaeda’s demands?

The article quotes Abrahms 2012:

Jones and Libicki (2008) then examined a larger sample, the universe of known terrorist groups between 1968 and 2006. Of the 648 groups identified in the RAND-MIPT Terrorism Incident database, only 4% obtained their strategic demands. … Chenoweth and Stephan (2008, 2011) provide additional empirical evidence that meting out pain hurts non-state actors at the bargaining table. … These statistical findings are reinforced with structured in-case comparisons highlighting that escalating from nonviolent methods of protest such as petitions, sit-ins, and strikes to deadly attacks tends to dissuade government compromise. … Other statistical research (Abrahms, 2012, Fortna, 2011) demonstrates that when terrorist attacks are combined with such discriminate violence, the bargaining outcome is not additive; on the contrary, the pain to the population significantly decreases the odds of government concessions.3

(Aside: Remember, right-wing violence doesn’t work. It’s stupid and you will fail at accomplishing anything.)

Another “mystery” about terrorism is that it actually doesn’t happen very often. It’s not that hard to drive a truck into a crowd or attack people with a machete. Armies are expensive; coughing on grocery store produce is cheap.

If terrorism is 1. ineffective and 2. not even used that often, why do terrorist groups exist at all?

Terrorists might just be dumb, violent people who try to deal with their problems by blowing them up, but there’s no evidence to this effect–terrorists are not less intelligent than the average person in their societies, anyway. People who are merely dumb and violent tend to get into fights with their neighbors, not take airplanes hostage.

Gwern suggests a different possibility: People join terrorist organizations because they want to be friends with the other terrorists. They’re like social clubs, but instead of bowling, you talk about how going on jihad would be totally awesome.

Things people crave: Meaning. Community. Brotherhood.

Terrorist organizations provide these to their members, most of whom don’t actually blow themselves up.

Gwern quotes Sageman’s Understanding Terror Networks:

Friendships cultivated in the jihad, just as those forged in combat in general, seem more intense and are endowed with special significance. Their actions taken on behalf of God and the umma are experienced as sacred. This added element increases the value of friendships within the clique and the jihad in general and diminishes the value of outside friendships.

Enough about terrorists; let’s talk about Americans:

“Jihad” is currently part of the Islamic cultural script–that is, sometimes Muslims see some form of “jihad” as morally acceptable. (They are not unique in committing terrorism, though–Marxist terrorists have created trouble throughout Latin America, for instance, and the Tamil Tigers of Sri Lanka were one of the world’s deadliest groups.)

Thankfully, though, few major groups in the US see jihad or terrorist violence as acceptable, but… we have our exceptions.

For example, after a Jewish professor, Bret Weinstein, declined to stay home on a “Day of Absence” intended to force whites away from Evergreen State College, WA, violent protests erupted. Bands of students armed with bats and tasers roamed the campus, searching for Weinstein; the poor professor was forced to flee and eventually resign.

(More on Evergreen.)

Antifa are a growing concern in the US, both on-campus and off. As Wikipedia notes:

Antifa groups, along with black bloc activists, were among those who protested the 2016 election of Donald Trump.[10][44] They also participated in the February 2017 Berkeley protests against alt-right[47][48][49][50] speaker Milo Yiannopoulos, where they gained mainstream attention,[27] with media reporting them “throwing Molotov cocktails and smashing windows”[2] and causing $100,000 worth of damage.[51]

Antifa counter-protesters at the 2017 Unite the Right rally in Charlottesville, Virginia in August 2017 “certainly used clubs and dyed liquids against the white supremacists”.[39]

During a Berkeley protest on August 27, 2017, an estimated one hundred antifa protesters joined a crowd of 2,000–4,000 counter-protesters to attack a reported “handful” of alt-right demonstrators and Trump supporters who showed up for a “Say No to Marxism” rally that had been cancelled by organizers due to security concerns. Some antifa activists beat and kicked unarmed demonstrators[51][63] and threatened to smash the cameras of anyone who filmed them.[64]

Antifa, like terrorist groups, typically attract folks who are single and have recently left home–young people who have just lost the community they were raised in and are in search of a new one.

The article recounts an amusing incident when a terrorist organization wanted to disband a cell, but struggled to convince its members to abandon their commitment to sacrificing themselves on behalf of jihad. Finally they hit upon a solution: they organized social get-togethers with women, then incentivised the men to get married, get jobs, and have babies. Soon all of the men were settled and raising children, too busy and invested in their new families to risk sacrificing it all for jihad. The cell dissolved.

Even Boko Haram was founded in response to the difficulties young men in Nigeria face in affording brides:

Our recent study found that marriage markets and inflationary brideprice are a powerful driver of participation in violence and drive recruitment into armed groups. Armed groups often arrange low-cost marriages for their members, help members afford brideprice, or provide extra-legal opportunities to acquire the capital necessary to take a wife. In Nigeria, in the years in which Boko Haram gained influence under founder Mohammed Yusuf, “items required for [a] successful [marriage] celebration kept changing in tune with inflation over the years.”66  A resident of the Railroad neighborhood of Maiduguri, where Yusuf established his mosque, recalled that in just a few years, Yusuf had facilitated more than 500 weddings. The group also provided support for young men to become “okada drivers,” who gained popularity for their affordable motorbike taxi services — who often used their profits to afford marriage. Thus, Boko Haram’s early recruits were often attracted by the group’s facilitation of marriage. Even in the aftermath of Yusuf’s assassination by the Nigerian state and the rise of Abubakar Shekau, the group has continued to exploit obstacles to marriage to attract supporters. The women and girls that are abducted by the group, estimated to number more than 6,000, are frequently married off to members of the group.

Antifa of course aren’t the only people in the US who commit violence; the interesting fact here is their organization. As far as I know, Dylann Roof killed more people than Antifa, but Roof acted alone.


I suggest, therefore, that the principal thing driving Antifa (and similar organizations) isn’t a rational pursuit of their stated objectives (did driving Milo out of Berkeley actually protect any illegal immigrants from deportation?) but the same social factors that drive Muslims to join terrorist groups: camaraderie, brotherhood, and the feeling that they are leading meaningful, moral lives by sacrificing themselves for their chosen cause.

Right-wingers do this, too (the military is an obvious source of “meaning” and “brotherhood” in many people’s lives).

And the pool of unmarried people to recruit into extremist organizations is only growing in America.

[Image: “We have always been at war with Eurasia--I mean, supported gay marriage.” CONFORM]

But we don’t have to look to organizations that commit violence to find this pattern. Why change one’s avatar to a rainbow pattern to celebrate gay marriage, or overlay a French flag after the November 2015 Paris attacks?

Why spend hours “fighting racism” by “deconstructing whiteness” online when you could do far more to help black people by handing out sandwiches at your local homeless shelter? (The homeless would also appreciate a hot lasagna.) What percentage of people who protest Islamophobia have actually bothered to befriend some Muslims and express support toward them?

The obvious answer is that these activities enhance the actor’s social standing among their friends and online compatriots. Congratulations received for turning your profile picture different colors: objective achieved. Actions that would actually help the targeted group require more effort and return less adulation, since they have to be done in real life.

Liberal groups seem to be better at social organizing–thus I’ve had an easier time coming up with liberal examples of this phenomenon. Conservative political organizations, at least in the US, seem to be smaller and offer less in the way of social benefits (this may be in part because conservatives are more likely to be married and employed and to have children, and because conservatives are more likely to channel such energies into their churches,) but they also do their share of social signaling that doesn’t achieve its claimed goal. “White pride” organizations, for example, generally do little to improve whites’ public image.

But is this an aberration? Or are things operating as designed? What’s the point of friendship and social standing in the first place?

Interestingly, in Jane Goodall‘s account of chimps in the Gombe, we see parallels to the origins of human social structures and friendships. Only male chimps consistently have what we would call “friendships;” females instead tend to live in groups with their children. Male friends benefit from each other’s assistance in hunting and controlling access to other food, like the coveted bananas. A single strong male may dominate a troop of chimps, but a coalition can bring him to a bloody end. Persistent dominance of a chimp troop (and thus dominance of food) is thus easier for males who have a strong coalition on their side–that is, friends.

Man is a political animal:

From these things therefore it is clear that the city-state is a natural growth, and that man is by nature a political animal, and a man that is by nature and not merely by fortune citiless is either low in the scale of humanity or above it … inasmuch as he is solitary, like an isolated piece at draughts.

And why man is a political animal in a greater measure than any bee or any gregarious animal is clear. For nature, as we declare, does nothing without purpose; and man alone of the animals possesses speech. … speech is designed to indicate the advantageous and the harmful, and therefore also the right and the wrong; for it is the special property of man in distinction from the other animals that he alone has perception of good and bad and right and wrong and the other moral qualities, and it is partnership in these things that makes a household and a city-state.

Most people desire to be members in good standing in their communities:

Thus also the city-state is prior in nature to the household and to each of us individually. [20] For the whole must necessarily be prior to the part; since when the whole body is destroyed, foot or hand will not exist except in an equivocal sense… the state is also prior by nature to the individual; for if each individual when separate is not self-sufficient, he must be related to the whole state as other parts are to their whole, while a man who is incapable of entering into partnership, or who is so self-sufficing that he has no need to do so, is no part of a state, so that he must be either a lower animal or a god.

Therefore the impulse to form a partnership of this kind is present in all men by nature… –Aristotle, Politics, Book 1

A couple of other relevant quotes:

[Images: quotes from Eysenck’s work on political extremism]

The spread of the internet has changed both who we’re talking to (the people in our communities) and how we engage with them, resulting in, I hypothesize, a memetic environment that increasingly favors horizontally (rather than vertically) transmitted memes. (If you are not familiar with this theory, I wrote about it here, here, here, here, here, here, here, and here.) Vertically spread memes tend to come from your parents and are survival-oriented; horizontal memes come from your friends and are social. A change in the memetic environment, therefore, has the potential to change the landscape of social, moral, and political ideas people frequently encounter–and has allowed us to engage in nearly costless, endless social signaling.

The result of that, it appears, is political polarization.

According to Pew:

A decade ago, the public was less ideologically consistent than it is today. In 2004, only about one-in-ten Americans were uniformly liberal or conservative across most values. Today, the share who are ideologically consistent has doubled: 21% express either consistently liberal or conservative opinions across a range of issues – the size and scope of government, the environment, foreign policy and many others.

The new survey finds that as ideological consistency has become more common, it has become increasingly aligned with partisanship. Looking at 10 political values questions tracked since 1994, more Democrats now give uniformly liberal responses, and more Republicans give uniformly conservative responses than at any point in the last 20 years.

This, of course, makes it harder for people to find common ground for compromises.

So if we want a saner, less histrionic political culture, the first step may be encouraging people to settle down, get married, and have children, then work on building communities that let people feel a sense of meaning in their real lives.

Still, I think letting your friends convince you that blowing yourself up is a good idea is pretty dumb.

“Cultural Collapse”

Tablet recently had an interesting essay on the theme of “why did Trump win?”

The material-grievances theory and the cultural-resentments theory can fit together because, in both cases, they tell us that people voted for Trump out of a perceived self-interest, which was to improve their faltering economic and material conditions, or else to affirm their cultural standing vis-à-vis the non-whites and the bicoastal elites. Their votes were, from this standpoint, rationally cast. … which ultimately would suggest that 2016’s election was at least a semi-normal event, even if Trump has his oddities. But here is my reservation.

I do not think the election was normal. I think it was the strangest election in American history in at least one major particular, which has to do with the qualifications and demeanor of the winning candidate. American presidents over the centuries have always cultivated, after all, a style, which has been pretty much the style of George Washington, sartorially updated. … Now, it is possible that, over the centuries, appearances and reality have, on occasion, parted ways, and one or another president, in the privacy of his personal quarters, or in whispered instructions to his henchmen, has been, in fact, a lout, a demagogue, a thug, and a stinking cesspool of corruption. And yet, until just now, nobody running for the presidency, none of the serious candidates, would have wanted to look like that, and this was for a simple reason. The American project requires a rigorously republican culture, without which a democratic society cannot exist—a culture of honesty, logic, science, and open-minded debate, which requires, in turn, tolerance and mutual respect. Democracy demands decorum. And since the president is supposed to be democracy’s leader, the candidates for the office have always done their best to, at least, put on a good act.

The author (Paul Berman) then proposes Theory III: Broad Cultural Collapse:

 A Theory 3 ought to emphasize still another non-economic and non-industrial factor, apart from marriage, family structure, theology, bad doctors, evil pharmaceutical companies, and racist ideology. This is a broad cultural collapse. It is a collapse, at minimum, of civic knowledge—a collapse in the ability to identify political reality, a collapse in the ability to recall the nature of democracy and the American ideal. An intellectual collapse, ultimately. And the sign of this collapse is an inability to recognize that Donald Trump has the look of a foreign object within the American presidential tradition.

Berman is insightful until he blames cultural collapse on the educational system (those dastardly teachers just decided not to teach about George Washington, I guess.)

We can’t blame education. Very few people had many years of formal education of any sort back in 1776 or 1810–even in 1900, far fewer people completed highschool than do today. The idea that highschool civics classes taught future voters what to look for in a president more effectively in 1815 than they do today therefore seems unlikely.

If anything, in my (admittedly limited, parental) interactions with the local schools, education seems to lag national sentiment. For example, the local schools still cover Columbus Day in a pro-Columbus manner (and I don’t even live in a particularly conservative area) and have special Veterans’ Day events. School curricula are, I think, fairly influenced by the desires of the Texas schools, because Texas is a big state that buys a lot of textbooks.

I know plenty of Boomers who voted for Trump, so if we’re looking at a change in school curricula, we’re looking at a shift that happened half a century ago (or more,) but only recently manifested.

That said, I definitely feel something coursing through society that I could call “Cultural Collapse.” I just don’t think the schools are to blame.

Yesterday I happened across a children’s book from the 1920s about famous musicians. Interwoven with the biographies of Beethoven and Mozart were political comments about kings and queens and European social structure, and about how these musicians of course saw through all of this royalty business and wanted to make music for the common people. It was an articulated ideology of democracy.

Sure, people today still think democracy is important, but the framing (and phrasing) is different. The book we recently read of mathematicians’ biographies didn’t stop to tell us how highly the mathematicians thought of the idea of common people voting (rather, when it bothered with ideology, it focused on increasing representation of women in mathematics and emphasizing the historical obstacles they faced.)

Meanwhile, as the NY Times reports, the percent of Americans who think living in a democracy is important is declining:

According to the Mounk-Foa early-warning system, signs of democratic deconsolidation in the United States and many other liberal democracies are now similar to those in Venezuela before its crisis.

Across numerous countries, including Australia, Britain, the Netherlands, New Zealand, Sweden and the United States, the percentage of people who say it is “essential” to live in a democracy has plummeted, and it is especially low among younger generations. …

Support for autocratic alternatives is rising, too. Drawing on data from the European and World Values Surveys, the researchers found that the share of Americans who say that army rule would be a “good” or “very good” thing had risen to 1 in 6 in 2014, compared with 1 in 16 in 1995.

That trend is particularly strong among young people. For instance, in a previously published paper, the researchers calculated that 43 percent of older Americans believed it was illegitimate for the military to take over if the government were incompetent or failing to do its job, but only 19 percent of millennials agreed. The same generational divide showed up in Europe, where 53 percent of older people thought a military takeover would be illegitimate, while only 36 percent of millennials agreed.

Note, though, that this is not a local phenomenon–any explanation that explains why support for democracy is down in the US needs to also explain why it’s down in Sweden, Australia, Britain, and the Netherlands (and maybe why it wasn’t so popular there in the first place.)

Here are a few different theories besides failing schools:

  1. Less common culture, due to integration and immigration
  2. More international culture, due to the internet, TV, and similar technologies
  3. Disney

Put yourself in your grandfather’s or great-grandfather’s shoes, growing up in the 1910s or ’20s. Cars were not yet common; chances were, if he wanted to go somewhere, he walked or rode a horse. Telephones and radios were still rare. TV barely existed.

If you wanted to talk to someone, you walked over to them and talked. If you wanted to talk to someone from another town, either you or they had to travel, often by horse or wagon. For long-distance news, you had newspapers and a few telegraph wires.

News traveled slowly. People traveled slowly (most people didn’t ride trains regularly.) Most of the people you talked to were folks who lived nearby, in your own community. Everyone not from your community was some kind of outsider.

There’s a story from Albion’s Seed:

During World War II, for example, three German submariners escaped from Camp Crossville, Tennessee. Their flight took them to an Appalachian cabin, where they stopped for a drink of water. The mountain granny told them to “git.” When they ignored her, she promptly shot them dead. The sheriff came, and scolded her for shooting helpless prisoners. Granny burst into tears, and said that she would not have done it if she had known they were Germans. The exasperated sheriff asked her what in “tarnation” she thought she was shooting at. “Why,” she replied, “I thought they was Yankees!”

And then your grandfather got shipped out to get shot at somewhere in Europe or the Pacific.

Today, technology has completely transformed our lives. When we want to talk to someone or hear their opinion, we can just pick up the phone, visit facebook, or flip on the TV. We have daily commutes that would have taken our ancestors a week to walk. People expect to travel thousands of miles for college and jobs.

The effect is a curious inversion: In a world where you can talk to anyone, why talk to your neighbors? Personally, I spend more time talking to people in Britain than to the folks next door (and I like my neighbors.)

Now, this blog was practically founded on the idea that this technological shift in the way ideas (memes) are transmitted has a profound effect on the kinds of ideas that are transmitted. When ideas must be propagated between relatives and neighbors, these ideas are likely to promote your own material well-being (as you must survive well enough to continue propagating the idea for it to go on existing,) whereas when ideas can be easily transmitted between strangers who don’t even live near each other, the ideas need not promote personal survival–they just need to sound good. (I went into more detail on this idea back in Viruses Want you to Spread Them, Mitochondrial Memes, and The Progressive Virus.)
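The difference is easy to see in a toy simulation (entirely my own sketch; the traits, weights, and numbers are invented for illustration): when memes spread only by their hosts surviving and reproducing, selection favors survival-promoting memes; when they spread by being repeated to strangers, selection favors catchiness and simply ignores effects on survival.

```python
import random

# Toy model (illustrative, not from any source): each meme has a survival
# benefit to its host and a "catchiness" that makes strangers repeat it.
# Vertical transmission: a meme propagates only if its host reproduces,
# so copies are weighted by survival benefit. Horizontal transmission:
# a meme propagates whenever it sounds good, so copies are weighted by
# catchiness alone.
random.seed(42)

def random_meme():
    return {"survival": random.uniform(-1, 1), "catchiness": random.uniform(-1, 1)}

def evolve(population, mode, generations=100):
    for _ in range(generations):
        key = "survival" if mode == "vertical" else "catchiness"
        weights = [max(0.01, 1 + m[key]) for m in population]
        population = random.choices(population, weights=weights, k=len(population))
    return population

start = [random_meme() for _ in range(1000)]
for mode in ("vertical", "horizontal"):
    end = evolve(start, mode)
    avg = lambda key: sum(m[key] for m in end) / len(end)
    print(f"{mode:10s} avg survival: {avg('survival'):+.2f}, avg catchiness: {avg('catchiness'):+.2f}")
```

In the vertical world, the surviving meme pool is strongly survival-promoting; in the horizontal world it is strongly catchy, and its effect on the host’s survival is whatever it happens to be–which is the point.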

How do these technological shifts affect how we form communities?

From Bowling Alone:

In a groundbreaking book based on vast data, Putnam shows how we have become increasingly disconnected from family, friends, neighbors, and our democratic structures– and how we may reconnect.

Putnam warns that our stock of social capital – the very fabric of our connections with each other – has plummeted, impoverishing our lives and communities.

Putnam draws on evidence including nearly 500,000 interviews over the last quarter century to show that we sign fewer petitions, belong to fewer organizations that meet, know our neighbors less, meet with friends less frequently, and even socialize with our families less often. We’re even bowling alone. More Americans are bowling than ever before, but they are not bowling in leagues. Putnam shows how changes in work, family structure, age, suburban life, television, computers, women’s roles and other factors have contributed to this decline.

to data on how many people don’t have any friends:

The National Science Foundation (NSF) reported in its General Social Survey (GSS) that unprecedented numbers of Americans are lonely. Published in the American Sociological Review (ASR) and authored by Miller McPhearson, Lynn Smith-Lovin, and Matthew Brashears, sociologists at Duke and the University of Arizona, the study featured 1,500 face-to-face interviews where more than a quarter of the respondents — one in four — said that they have no one with whom they can talk about their personal troubles or triumphs. If family members are not counted, the number doubles to more than half of Americans who have no one outside their immediate family with whom they can share confidences. Sadly, the researchers noted increases in “social isolation” and “a very significant decrease in social connection to close friends and family.”

Rarely has news from an academic paper struck such a responsive nerve with the general public. These dramatic statistics from ASR parallel similar trends reported by the Beverly LaHaye Institute — that over the 40 years from 1960 to 2000 the Census Bureau had expanded its analysis of what had been a minor category.  The Census Bureau categorizes the term “unrelated individuals” to designate someone who does not live in a “family group.” Sadly, we’ve seen the percentage of persons living as “unrelated individuals” almost triple, increasing from 6 to 16 percent of all people during the last 40 years. A huge majority of those classified as “unrelated individuals” (about 70 percent) lived alone.

it seems that interpersonal trust is deteriorating:

Long-run data from the US, where the General Social Survey (GSS) has been gathering information about trust attitudes since 1972, suggests that people trust each other less today than 40 years ago. This decline in interpersonal trust in the US has been coupled with a long-run reduction in public trust in government – according to estimates compiled by the Pew Research Center since 1958, today trust in the government in the US is at historically low levels.

Interestingly:

Interpersonal trust attitudes correlate strongly with religious affiliation and upbringing. Some studies have shown that this strong positive relationship remains after controlling for several survey-respondent characteristics.1 This, in turn, has led researchers to use religion as a proxy for trust, in order to estimate the extent to which economic outcomes depend on trust attitudes. Estimates from these and other studies using an instrumental-variable approach, suggest that trust has a causal impact on economic outcomes.2 This suggests that the remarkable cross-country heterogeneity in trust that we observe today, can explain a significant part of the historical differences in cross-country income levels.

Also:

Measures of trust from attitudinal survey questions remain the most common source of data on trust. Yet academic studies have shown that these measures of trust are generally weak predictors of actual trusting behaviour. Interestingly, however, questions about trusting attitudes do seem to predict trustworthiness. In other words, people who say they trust other people tend to be trustworthy themselves.3

[Chart: “Just look at that horrible trend of migrants being kept out of Europe”]

Our technological shifts haven’t just affected ideas and conversations–with people able to travel thousands of miles in an afternoon, they’ve also affected the composition of communities. The US in 1920 was almost 90% white and 10% black (with that black population concentrated in the segregated South). All other races together totaled only a couple percent. Today, the US is <65% white, 13% black, 16% Hispanic, 6% Asian and Native American, and 9% “other” or multi-racial. (These add up to more than 100% because Hispanic is an ethnic category that overlaps with the racial ones.)

Similar changes have happened in Europe, both with the creation of the Free Movement Zone and the discovery that the Mediterranean isn’t that hard to cross, though the composition of the newcomers obviously differs.

Diversity may have its benefits, but one of the things it isn’t is a common culture.

With all of these changes, do I really feel that there is anything particularly special about my local community and its norms over those of my British friends?

What about Disney?

Well, Disney’s most profitable product hasn’t exactly been pro-democracy, though I doubt a few princess movies can actually budge people’s political compasses or drive them to vote for Trump (or Hillary.) But what about the general content of children’s stories? It sure seems like there are a lot fewer stories focused on characters from American history than in the days when Davy Crockett was the biggest thing on TV.

Of course this loops back into technological changes, as American TV and movies are enjoyed by an increasingly non-American audience and media content is driven by advertisers’ desire to reach specific audiences (eg, the “rural purge” in TV programming, when popular TV shows aimed at more rural or older audiences were cancelled in favor of programs featuring urban characters, which advertisers believed would appeal to younger viewers with more cash to spend.)

If cultural collapse is happening, it’s not because we lack for civics classes, but because civics classes alone cannot create a civic culture where there is none.

Logan Paul and the Algorithms of Outrage

Leaving aside the issues of “Did Logan Paul actually do anything wrong?” and “Is changing YouTube’s policies actually in Game Theorist’s interests?” Game Theorist makes a good point: while YouTube might want to say, for PR reasons, that it is doing something about big, bad, controversial videos like Logan Paul’s, it also makes money off those same videos. YouTube–like many other parts of the internet–is primarily click driven. (Few of us are paying money for programs on YouTube Red.) YouTube wants views, and controversy drives views.

That doesn’t mean YouTube wants just any content–a reputation for having a bunch of pornography would probably have a damaging effect on channels aimed at small children, as their parents would click elsewhere. But aside from the actual corpse, Logan’s video wasn’t the sort of thing that would drive away small viewers–they’d get bored of the boring non-cartoons talking to the camera long before the suicide even came up.

Logan Paul actually managed to hit a very sweet spot: controversial enough to draw in visitors (tons of them) but not so controversial that he’d drive away other visitors.

In case you’ve forgotten the controversy in a fog of other controversies, LP’s video about accidentally finding a suicide in the Suicide Forest was initially well-received, racking up thousands of likes and views before someone got offended and started up the outrage machine. Once the outrage machine got going, public sentiment turned on a dime and LP was suddenly the subject of a full two or three days of Twitter hate. The hate, of course, got YouTube more views. LP took down the video and posted an apology–which generated more attention. Major media outlets were now covering the story. Even Tablet managed to quickly come up with an article: Want a New Year’s Resolution? Don’t be Like Logan Paul.

And it worked. I passed up Tablet’s regular article on Trump and Bagels and Culture, but I clicked on that article about Logan Paul because I wanted to know what on earth Tablet had to say about LP, a YouTuber whom, 24 hours prior, I had never heard of.

And the more respectable (or at least highly-trafficked) news outlets picked up the story, the higher Logan’s videos rose on the YouTube charts. And as more people watched more of LP’s other videos, they found more things to be offended at. For example, once he ran through the streets of Japan holding a fish. A FISH, I tell you. He waved this fish at people and was generally very annoying.

I don’t like LP’s style of humor, but I’m not getting worked up over a guy waving a fish around.

So understand this: you are in an outrage machine. The purpose of the outrage machine is to drive traffic, which makes clicks, which result in ad revenue. There are probably whole websites (Huffpo, CNN) that derive a significant percent of their profits from hate-clicks–that is, intentionally posting incendiary garbage not because they believe it or think it is just or true or appeals to their base, but because they can get people to click on it in sheer shock or outrage.

Your emotions–your “emotional labor,” as the SJWs call it–are being turned into someone else’s dollars.

And the result is a country that is increasingly polarized. Increasingly outraged. Increasingly exhausted.

Step back for a moment. Take a deep breath. Get some fresh air. Ask yourself, “Does this really matter? Am I actually helping anyone? Will I remember this in a week?”

I’d blame the SJWs for the outrage machine–and really, they are good at running it–but I think it started with CNN and “24 hour news.” You have to do something to fill that time. Then came Fox News, which was like CNN, but more controversial in order to lure viewers away from the more established channel. Now we have the interplay of Facebook, Twitter, HuffPo, online newspapers, YouTube, etc–driven largely by automated algorithms designed to maximize clicks–even hate clicks.

The Logan Paul controversy is just one example out of thousands, but let’s take a moment and think about whether it really mattered. Some guy whose job description is “makes videos of his life and posts them on YouTube” was already shooting a video about his camping trip when he happened upon a dead body. He filmed the body, called the police, canceled his camping trip, downed a few cups of sake while talking about how shaken he was, and ended the video with a plea that people seek help and not commit suicide.

In between these events was laughter–I interpret it as nervous laughter in an obviously distressed person. Other people interpret this as mocking. Even if you think LP was mocking the deceased, I think you should be more concerned that Japan has a “Suicide Forest” in the first place.

Let’s look at a similar case: When three year old Alan Kurdi drowned, the photograph of his dead body appeared on websites and newspapers around the world–earning thousands of dollars for the photographers and news agencies. Politicians then used little Alan’s death to push particular political agendas–Hillary Clinton even talked about Alan Kurdi’s death in one of the 2016 election debates. Alan Kurdi’s death was extremely profitable for everyone making money off the photograph, but no one got offended over this.

Why is it acceptable for photographers and media agencies to make money off a three year old boy who drowned because his father was a negligent fuck who didn’t put a life vest on him*, but not acceptable for Logan Paul to make money off a guy who chose to kill himself and then leave his body hanging in public where any random person could find it?

[Photo: Elian Gonzalez, sobbing, torn at gunpoint from his relatives. BTW, this photo won the 2001 Pulitzer Prize for Breaking News Photography.]

Let’s take a more explicitly political case. Remember when Bill Clinton and Janet Reno sent 130 heavily armed INS agents to the home of child refugee Elian Gonzalez’s relatives** so they could kick him out of the US and send him back to Cuba?

Now imagine Donald Trump sending SWAT teams after sobbing children. How would people react?

The outrage machine functions because people think it is good. It convinces people that it is casting light on terrible problems that need correcting. People are getting offended at things that they wouldn’t have if the outrage machine hadn’t told them to. You think you are serving justice. In reality, you are mad at a man for filming a dead guy and running around Japan with a fish. Jackass did worse, and it was on MTV for two years. Game Theorist wants more consequences for people like Logan Paul, but he doesn’t realize that anyone can get offended at just about anything. His videos have graphic descriptions of small children being murdered (in videogame contexts, like Five Nights at Freddy’s or “What would happen if the babies in Mario Kart were involved in real car crashes at racing speeds?”) I don’t find this “family friendly.” Sometimes I (*gasp*) turn off his videos as a result. Does that mean I want a Twitter mob to come destroy his livelihood? No. It means a Twitter mob could destroy his livelihood.

For that matter, as Game Theorist himself notes, the algorithm itself rewards and amplifies outrage–meaning that people are incentivised to create completely false outrage against innocent people. Punishing one group of people more because the algorithm encourages bad behavior in other people is cruel and does not solve the problem. Changing the algorithm would solve the problem, but the algorithm is what makes YouTube money.
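To make the incentive concrete, here is a minimal sketch of a pure click-maximizing ranker (this is not YouTube’s actual algorithm; the videos, the numbers, and the “hate-click” assumption are all invented for illustration):

```python
# A toy ranker that sees only one signal: expected clicks.
# Assumption (invented for illustration): outrage generates extra
# hate-clicks on top of a video's baseline interest.

videos = [
    # (title, baseline_interest, outrage_level 0..1)
    ("calm explainer",    0.30, 0.0),
    ("mild controversy",  0.25, 0.4),
    ("full outrage bait", 0.20, 0.9),
]

def expected_clicks(baseline, outrage, audience=1_000_000):
    hate_click_rate = 0.5 * outrage  # people click to be angry or to dunk
    return int(audience * min(1.0, baseline + hate_click_rate))

# Rank purely by expected clicks -- the only metric this ranker optimizes.
for title, base, outrage in sorted(videos, key=lambda v: expected_clicks(v[1], v[2]), reverse=True):
    print(f"{title:18s} -> {expected_clicks(base, outrage):>9,} expected clicks")
```

As long as outrage reliably converts into clicks, the most outrageous video tops the ranking even though it has the smallest base of people who actually want to watch it. That is why changing the algorithm, not punishing individual creators, is the lever that matters–and also why the algorithm doesn’t change.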

In reality, the outrage machine is pulling the country apart–and I don’t know about you, but I live here. My stuff is here; my loved ones are here.

The outrage machine must stop.

*I remember once riding in an airplane with my father. As the flight crew explained that in the case of a sudden loss of cabin pressure, you should secure your own mask before assisting your neighbors, his response was a very vocal “Hell no, I’m saving my kid first.” Maybe not the best idea, but the sentiment is sound.

**When the boat Elian Gonzalez and his family were riding in capsized, his mother and her boyfriend put him in an inner tube, saving his life even though they drowned.

Testosterone metabolization, autism, male brain, and female identity

I began this post intending to write about testosterone metabolization in autism and possible connections with transgender identity, but realized halfway through that I didn’t actually know whether the autist-trans connection was primarily male-to-female or female-to-male. I had assumed that the relevant population is primarily MtF because both autists and trans people are primarily male, but both groups do have female populations that are large enough to contribute significantly. Here’s a sample of the data I’ve found so far:

A study conducted by a team of British scientists in 2012 found that of a pool of individuals not diagnosed on the autism spectrum, female-to-male (FTM) transgender people have higher rates of autistic features than do male-to-female (MTF) transgender people or cisgender males and females. Another study, which looked at children and adolescents admitted to a gender identity clinic in the Netherlands, found that almost 8 percent of subjects were also diagnosed with ASD.

Note that both of these studies are looking at trans people and assessing whether or not they have autism symptoms, not looking at autists and asking if they have trans symptoms. Given the characterization of autism as “extreme male brain” and that autism is diagnosed in males at about 4x the rate of females, the fact that there is some overlap between “women who think they think like men” and “traits associated with male thought patterns” is not surprising.

If the reported connection between autism and trans identity is just “autistic women feel like men,” that’s pretty non-mysterious and I just wasted an afternoon.

Though the data I have found so far still does not look directly at autists and ask how many of them have trans symptoms, the Wikipedia page devoted to transgender and transsexual computer programmers lists only MtFs and no FtMs. Whether or not this is a pattern throughout the wider autism community, it definitely seems to be a thing among programmers. (Relevant discussion.)

So, returning to the original post:

Autism contains an amusing contradiction: on the one hand, autism is sometimes characterized as “extreme male brain,” and on the other hand, (some) autists (may be) more likely than neurotypicals to self-identify as transwomen–that is, biological men who see themselves as women. This seems contradictory: if autists are more masculine, mentally, than the average male, why don’t they identify as football players, army rangers, or something else equally masculine? For that matter, why isn’t a group with “extreme male brains” regarded as more, well, masculine?

(And if autists have extreme male brains, does that mean football players don’t? Do football players have more feminine brains than autists? Do colorless green ideas sleep furiously? DO WORDS MEAN?)

*Ahem*

In favor of the “extreme male brain” hypothesis, we have evidence that testosterone is important for certain brain functions, like spatial recognition, and articles like this one, Testosterone and the brain:

Gender differences in spatial recognition, and age-related declines in cognition and mood, point towards testosterone as an important modulator of cerebral functions. Testosterone appears to activate a distributed cortical network, the ventral processing stream, during spatial cognition tasks, and addition of testosterone improves spatial cognition in younger and older hypogonadal men. In addition, reduced testosterone is associated with depressive disorders.

(Note that women also suffer depression at higher rates than men.)

So people with more testosterone are better at spatial cognition and other tasks that “autistic” brains typically excel at, and brains with less testosterone tend to be moody and depressed.

But hormones are tricky things. Where do they come from? Where do they go? How do we use them?

According to Wikipedia:

During the second trimester [of pregnancy], androgen level is associated with gender formation.[13] This period affects the femininization or masculinization of the fetus and can be a better predictor of feminine or masculine behaviours such as sex typed behaviour than an adult’s own levels. A mother’s testosterone level during pregnancy is correlated with her daughter’s sex-typical behavior as an adult, and the correlation is even stronger than with the daughter’s own adult testosterone level.[14]

… Early infancy androgen effects are the least understood. In the first weeks of life for male infants, testosterone levels rise. The levels remain in a pubertal range for a few months, but usually reach the barely detectable levels of childhood by 4–6 months of age.[15][16] The function of this rise in humans is unknown. It has been theorized that brain masculinization is occurring since no significant changes have been identified in other parts of the body.[17] The male brain is masculinized by the aromatization of testosterone into estrogen, which crosses the blood–brain barrier and enters the male brain, whereas female fetuses have α-fetoprotein, which binds the estrogen so that female brains are not affected.[18]

(Bold mine.)

Let’s re-read that: the male brain is masculinized by the aromatization of testosterone into estrogen.

If that’s not a weird sentence, I don’t know what is.

Let’s hop over to the scientific literature, eg, Estrogen Actions in the Brain and the Basis for Differential Action in Men and Women: A Case for Sex-Specific Medicines:

Burgeoning evidence now documents profound effects of estrogens on learning, memory, and mood as well as neurodevelopmental and neurodegenerative processes. Most data derive from studies in females, but there is mounting recognition that estrogens play important roles in the male brain, where they can be generated from circulating testosterone by local aromatase enzymes or synthesized de novo by neurons and glia. Estrogen-based therapy therefore holds considerable promise for brain disorders that affect both men and women. However, as investigations are beginning to consider the role of estrogens in the male brain more carefully, it emerges that they have different, even opposite, effects as well as similar effects in male and female brains. This review focuses on these differences, including sex dimorphisms in the ability of estradiol to influence synaptic plasticity, neurotransmission, neurodegeneration, and cognition, which, we argue, are due in a large part to sex differences in the organization of the underlying circuitry.

Hypothesis: the way testosterone works in the brain (where we both do math and “feel” male or female) and the way it works in the muscles might be very different.

Do autists actually differ from other people in testosterone (or other hormone) levels?

In Elevated rates of testosterone-related disorders in women with autism spectrum conditions, researchers surveyed autistic women and mothers of autistic children about various testosterone-related medical conditions:

Compared to controls, significantly more women with ASC [Autism Spectrum Conditions] reported (a) hirsutism, (b) bisexuality or asexuality, (c) irregular menstrual cycle, (d) dysmenorrhea, (e) polycystic ovary syndrome, (f) severe acne, (g) epilepsy, (h) tomboyism, and (i) family history of ovarian, uterine, and prostate cancers, tumors, or growths. Compared to controls, significantly more mothers of ASC children reported (a) severe acne, (b) breast and uterine cancers, tumors, or growths, and (c) family history of ovarian and uterine cancers, tumors, or growths.

Androgenic Activity in Autism has an unfortunately low number of subjects (N=9), but its results are nonetheless intriguing:

Three of the children had exhibited explosive aggression against others (anger, broken objects, violence toward others). Three engaged in self-mutilations, and three demonstrated no aggression and were in a severe state of autistic withdrawal. The appearance of aggression against others was associated with having fewer of the main symptoms of autism (autistic withdrawal, stereotypies, language dysfunctions).

Three of their subjects (they don’t say which, but presumably from the first group,) had abnormally high testosterone levels (including one of the girls in the study.) The other six subjects had normal androgen levels.

This is the first report of an association between abnormally high androgenic activity and aggression in subjects with autism. Although a previously reported study did not find group mean elevations in plasma testosterone in prepubertal autistic subjects (4), it appears here that in certain autistic individuals, especially those in puberty, hyperandrogeny may play a role in aggressive behaviors. Also, there appear to be distinct clinical forms of autism that are based on aggressive behaviors and are not classified in DSM-IV. Our preliminary findings suggest that abnormally high plasma testosterone concentration is associated with aggression against others and having fewer of the main autistic symptoms.

So, some autists do have abnormally high testosterone levels, but those same autists are less autistic, overall, than other autists. More autistic behavior, aggression aside, is associated with normal hormone levels. Probably.

But of course that’s not fetal or early infancy testosterone levels. Unfortunately, it’s rather difficult to study fetal testosterone levels in autists, as few autists were diagnosed as fetuses. However, Foetal testosterone and autistic traits in 18 to 24-month-old children comes close:

Levels of FT [Fetal Testosterone] were analysed in amniotic fluid and compared with autistic traits, measured using the Quantitative Checklist for Autism in Toddlers (Q-CHAT) in 129 typically developing toddlers aged between 18 and 24 months (mean ± SD 19.25 ± 1.52 months). …

Sex differences were observed in Q-CHAT scores, with boys scoring significantly higher (indicating more autistic traits) than girls. In addition, we confirmed a significant positive relationship between FT levels and autistic traits.

I feel like this is veering into “we found that boys score higher on a test of male traits than girls did” territory, though.

In Polymorphisms in Genes Involved in Testosterone Metabolism in Slovak Autistic Boys, researchers found:

The present study evaluates androgen and estrogen levels in saliva as well as polymorphisms in genes for androgen receptor (AR), 5-alpha reductase (SRD5A2), and estrogen receptor alpha (ESR1) in the Slovak population of prepubertal (under 10 years) and pubertal (over 10 years) children with autism spectrum disorders. The examined prepubertal patients with autism, pubertal patients with autism, and prepubertal patients with Asperger syndrome had significantly increased levels of salivary testosterone (P < 0.05, P < 0.01, and P < 0.05, respectively) in comparison with control subjects. We found a lower number of (CAG)n repeats in the AR gene in boys with Asperger syndrome (P < 0.001). Autistic boys had an increased frequency of the T allele in the SRD5A2 gene in comparison with the control group. The frequencies of T and C alleles in ESR1 gene were comparable in all assessed groups.

What’s the significance of CAG repeats in the AR gene? Apparently they vary inversely with sensitivity to androgens:

Individuals with a lower number of CAG repeats exhibit higher AR gene expression levels and generate more functional AR receptors increasing their sensitivity to testosterone…

Fewer repeats, more sensitivity to androgens. The SRD5A2 gene is also involved in testosterone metabolization, though I’m not sure exactly what the T allele does relative to the other variants.

But just because there’s a lot of something in the blood (or saliva) doesn’t mean the body is using it. Diabetics can have high blood sugar because their bodies lack the necessary insulin to move the sugar from the blood into their cells. Fewer androgen receptors could mean the body is metabolizing testosterone less effectively, which in turn leaves more of it floating in the blood… Biology is complicated.

What about estrogen and the autistic brain? That gets really complicated. According to Sex Hormones in Autism: Androgens and Estrogens Differentially and Reciprocally Regulate RORA, a Novel Candidate Gene for Autism:

Here, we show that male and female hormones differentially regulate the expression of a novel autism candidate gene, retinoic acid-related orphan receptor-alpha (RORA) in a neuronal cell line, SH-SY5Y. In addition, we demonstrate that RORA transcriptionally regulates aromatase, an enzyme that converts testosterone to estrogen. We further show that aromatase protein is significantly reduced in the frontal cortex of autistic subjects relative to sex- and age-matched controls, and is strongly correlated with RORA protein levels in the brain.

If autists are bad at converting testosterone to estrogen, this could leave extra testosterone floating around in their blood… but it doesn’t explain their supposed “extreme male brain.” Here’s another study on the same subject, since it’s confusing:

Comparing the brains of 13 children with and 13 children without autism spectrum disorder, the researchers found a 35 percent decrease in estrogen receptor beta expression as well as a 38 percent reduction in the amount of aromatase, the enzyme that converts testosterone to estrogen.

Levels of estrogen receptor beta proteins, the active molecules that result from gene expression and enable functions like brain protection, were similarly low. There was no discernable change in expression levels of estrogen receptor alpha, which mediates sexual behavior.

I don’t know if anyone has tried injecting RORA-deficient mice with estrogen, but here is a study about the effects of injecting reelin-deficient mice with estrogen:

The animals in the new studies, called ‘reeler’ mice, have one defective copy of the reelin gene and make about half the amount of reelin compared with controls. …

Reeler mice with one faulty copy serve as a model of one of the most well-established neuro-anatomical abnormalities in autism. Since the mid-1980s, scientists have known that people with autism have fewer Purkinje cells in the cerebellum than normal. These cells integrate information from throughout the cerebellum and relay it to other parts of the brain, particularly the cerebral cortex.

But there’s a twist: both male and female reeler mice have less reelin than control mice, but only the males lose Purkinje cells. …

In one of the studies, the researchers found that five days after birth, reeler mice have higher levels of testosterone in the cerebellum compared with genetically normal males.

Keller’s team then injected estradiol — a form of the female sex hormone estrogen — into the brains of 5-day-old mice. In the male reeler mice, this treatment increases reelin levels in the cerebellum and partially blocks Purkinje cell loss. Giving more estrogen to female reeler mice has no effect — but females injected with tamoxifen, an estrogen blocker, lose Purkinje cells. …

In another study, the researchers investigated the effects of reelin deficiency and estrogen treatment on cognitive flexibility — the ability to switch strategies to solve a problem. …

“And we saw indeed that the reeler mice are slower to switch. They tend to persevere in the old strategy,” Keller says. However, male reeler mice treated with estrogen at 5 days old show improved cognitive flexibility as adults, suggesting that the estrogen has a long-term effect.

This still doesn’t explain why autists would self-identify as transgender women (mtf) at higher rates than average, but it does suggest that any who do start hormone therapy might receive benefits completely independent of gender identity.

Let’s stop and step back a moment.

Autism is, unfortunately, badly defined. As the saying goes, if you’ve met one autist, you’ve met one autist. There are probably a variety of different, complicated things going on in the brains of different autists, simply because a variety of different, complicated conditions are all being lumped together under a single label. Any mental disability that can include both non-verbal people who can barely dress and feed themselves and require lifetime care, and billionaires like Bill Gates, is a very badly defined condition.

(Unfortunately, people diagnose autism with questionnaires that include questions like “Is the child pedantic?”, which could be equally true of an autistic child and of a child who is merely very smart, has learned more about a particular subject than their peers, and so responds in more detail than the adult is used to.)

The average autistic person is not a programmer. Autism is a disability, and the average diagnosed autist is pretty darn disabled. Among the people who have jobs and friends but nonetheless share some symptoms with formally diagnosed autists, though, programming and the like appear to be pretty popular professions.

Back in my day, we just called these folks nerds.

Here’s a theory from a completely different direction: people feel the differences between themselves and a group they are supposed to fit into far more strongly than the differences between themselves and a distant group. Growing up, you probably got into more conflicts with your siblings and parents than with random strangers, even though–or perhaps because–your family is nearly identical to you genetically, culturally, and environmentally. “I am nothing like my brother!” a man declares, while simultaneously affirming that he has a great deal in common with members of a race and culture from the other side of the planet. Your coworker, someone specifically selected for having mental and technical aptitudes and training similar to your own, has a distinct list of traits that drive you nuts, from the way he staples papers to the way he pronounces his Ts, while the women of an obscure Afghan tribe of goat herders simply don’t enter your consciousness.

Nerds, somewhat by definition, don’t fit in. You don’t worry much about fitting into a group you’re not part of in the first place–you probably don’t worry much about whether or not you fit in with Melanesian fishermen–but most people work hard at fitting in with their own group.

So if you’re male, but you don’t fit in with other males (say, because you’re a nerd,) and you’re down at the bottom of the highschool totem pole and feel like all of the women you’d like to date are judging you negatively next to the football players, then you might feel, rather strongly, the differences between you and other males. Other males are aggressive, they call you a faggot, they push you out of their spaces and threaten you with violence, and there’s very little you can do to respond besides retreat into your “nerd games.”

By contrast, women are polite to you, not aggressive, and don’t aggressively push you out of their spaces. Your differences with them are much less problematic, so you feel like you “fit in” with them.

(There is probably a similar dynamic at play with American men who are obsessed with anime. It’s not so much that they are truly into Japanese culture–which is mostly about quietly working hard–as that they don’t fit in very well with their own culture.) (Note: not intended as a knock on anime, which certainly has some good works.)

And here’s another theory: autists have some interesting difficulties with constructing categories and making inferences from data. They also have trouble going along with the crowd, and may have fewer “mirror neurons” than normal people. So maybe autists just process the categories of “male” and “female” a little differently than everyone else, and in a small subset of autists, this results in trans identity.*

And another: maybe there are certain intersex disorders which result in differences in brain wiring/organization. (Yes, there are real intersex disorders, like Klinefelter’s, in which people have XXY chromosomes instead of XX or XY.) In a small set of cases, these unusually wired brains may be extremely good at doing certain tasks (like programming), resulting in people who are both “autism spectrum” and “trans”. This is actually the theory I’ve been running with for years, though it is not incompatible with the hormonal theories discussed above.

But we are talking small: trans people of any sort are extremely rare, probably on the order of <1/1000. Even if autists were trans at 8 times the rate of non-autists, that would still be only 8/1000, or 1/125. Autists themselves are pretty rare (estimates vary, but the vast majority of people are not autistic at all,) so we are talking about a very small subset of a very small population in the first place. We only notice these correlations at all because the total population has gotten so huge.
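
(If you want the arithmetic laid out, here it is as a quick back-of-the-envelope script. The trans rate and the 8x multiplier are the figures from the paragraph above; the population and autism rate are placeholders I’ve assumed purely for illustration, since, as noted, estimates vary.)

```python
# Back-of-the-envelope version of the base-rate argument above.
# ASSUMED for illustration: population and autism_rate.
# From the text: trans_rate ("on the order of <1/1000") and the 8x multiplier.
population  = 330_000_000  # roughly the US population (assumption)
autism_rate = 0.015        # assumed ~1.5%; estimates vary widely
trans_rate  = 1 / 1000
multiplier  = 8

autists       = population * autism_rate           # ~5 million people
trans_autists = autists * trans_rate * multiplier  # 8/1000 = 1/125 of autists

print(f"autists:       {autists:,.0f}")
print(f"trans autists: {trans_autists:,.0f}")
# A very small subset of a very small population -- yet in a population of
# hundreds of millions, still tens of thousands of people, which is why the
# correlation is visible at all.
```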

Sometimes, extremely rare things are random chance.

Local Optima, Diversity, and Patchwork

Local optima–or optimums, if you prefer–are an illusion created by distance. Picture a curve with hilltops at (approximately) X=2, X=4, and X=6. A man standing on the hilltop at X=2 may see land sloping downward all around himself and think that he is at the highest point on the graph.

But hand him a telescope, and he discovers that the fellow standing on the hilltop at X=4 is even higher than he is. And hand the fellow at X=4 a telescope, and he’ll discover that X=6 is even higher.

A global optimum is the best possible way of doing something; a local optimum can look like a global optimum because all of the other, similar ways of doing the same thing are worse. To get from a local optimum to a global optimum, you might have to endure a significant trough of things going worse before you reach your destination. (Those troughs would be the points X=3.03 and X=5.02 on the graph.) If the troughs are short and shallow enough, people can accidentally power their way through. If long and deep enough, people get stuck.
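
(The hill-and-trough picture maps neatly onto greedy search, if you’ll forgive a programming digression. Here’s a minimal sketch–the landscape, the names height and hill_climb, and all the numbers are mine, invented for illustration: a climber who only ever steps uphill gets stuck on the first hill, while random restarts, playing the role of luck or outside contact, can land beyond a trough and find a higher one.)

```python
import math
import random

def height(x):
    """A made-up landscape: hills of increasing height near x = 2, 4, and 6."""
    hills = [(2, 1.0), (4, 1.5), (6, 2.0)]  # (position, peak height)
    return sum(h * math.exp(-((x - c) ** 2) / 0.2) for c, h in hills)

def hill_climb(x, step=0.05, iters=2000):
    """Greedy local search: take a small step only if it goes uphill."""
    for _ in range(iters):
        candidate = x + random.choice([-step, step])
        if height(candidate) > height(x):
            x = candidate
    return x

print(round(hill_climb(2.0), 2))  # stays near 2: a local optimum

# Random restarts play the role of new technology or random chance:
# some starting points land beyond a trough, in the basin of a higher hill.
best = max((hill_climb(random.uniform(0, 8)) for _ in range(20)), key=height)
print(round(best, 2))             # almost always near 6: the global optimum
```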

The introduction of new technology, exposure to another culture’s solutions, or even random chance can expose a local optimum and propel a group to cross that trough.

For example, back in 1400, Europeans were perfectly happy to get their Chinese silks, spices, and porcelains via the overland Silk Road. But with the fall of Constantinople to the Turks in 1453, the Silk Road became more fragmented and difficult (i.e., dangerous, i.e., expensive) to travel. The increased cost of the normal route prompted Europeans to start exploring other, less immediately profitable trade routes–like the possibility of sailing clear around the world, via the ocean, to the other side of China.

Without the eastern trade routes first diminishing in profitability, it wouldn’t have been economically viable to explore and develop the western routes. (The discovery of the Americas along the way was a happy accident.)

West Hunter (Greg Cochran) writes frequently about local optima; here’s an excerpt on plant domestication:

The reason that a few crops account for the great preponderance of modern agriculture is that a bird in the hand – an already-domesticated, already-optimized crop – feeds your family/makes money right now, while a potentially useful yet undomesticated crop doesn’t. One successful domestication tends to inhibit others that could flourish in the same niche. Several crops were domesticated in the eastern United States, but with the advent of maize and beans (from Mesoamerica) most were abandoned. Maybe if those Amerindians had continued to selectively breed sumpweed for a few thousand years, it could have been a contender: but nobody is quite that stubborn.

Teosinte was an unpromising weed: it’s hard to see why anyone bothered to try to domesticate it, and it took a long time to turn it into something like modern maize. If someone had brought wheat to Mexico six thousand years ago, likely the locals would have dropped maize like a hot potato. But maize ultimately had advantages: it’s a C4 plant, while wheat is C3: maize yields can be much higher.

Teosinte is the ancestor of modern corn. Cochran’s point is that in the domestication game, wheat is a local optimum; given the wild ancestors of wheat and corn, you’d develop a better, more nutritious variety of wheat first and probably just abandon the corn. But if you didn’t have wheat and you just had corn, you’d keep at the corn–and in the end, get an even better plant.

(Of course, corn is a success story; plenty of people domesticated plants that actually weren’t very good just because that’s what they happened to have.)

Japan in 1850 was a culturally rich, pre-industrial, feudal society with a strong isolationist stance. In 1853, the Japanese discovered that the rest of the world’s industrial, military technology was now sufficiently advanced to pose a serious threat to Japanese sovereignty. Things immediately degenerated, culminating in the Boshin War (civil war, 1868-9,) but with the Meiji Restoration Japan embarked on an industrialization crash-course. By 1895, Japan had kicked China’s butt in the First Sino-Japanese War, and the Japanese population–after holding steady for centuries–doubled between 1873 and 1935, from 35 to 70 million people. By the 1930s, Japan was one of the world’s most formidable industrial powers, and today it remains an economic and technological powerhouse.

Clearly the Japanese people, in 1850, contained the untapped ability to build a much more complex and advanced society than the one they had, and it did not take much exposure to the outside world to precipitate a total economic and technological revolution.

Sequoyah’s syllabary, showing script and print forms

A similar case occurred in 1821 when Sequoyah, a Cherokee man, invented his own syllabary (a syllable-based writing system) after observing American soldiers reading letters. The Cherokee quickly adopted Sequoyah’s writing system–by 1825, the majority of Cherokee were literate and the Cherokee had their own printing industry. Interestingly, although some of the Cherokee letters look like Latin, Greek, or Cyrillic letters, there is no correspondence in sound, because Sequoyah could not read English. He developed his entire syllabary after simply being exposed to the idea of writing.

The idea of literacy has occurred independently only a few times in human history; the vast majority of people picked up alphabets from someone else. Our alphabet comes from the Latins, who got it from the Greeks, who adopted it from the Phoenicians, who got it from some Proto-Canaanite script writers, and even then literacy spread pretty slowly. The Cherokee, while not as technologically advanced as Europeans at the time, were already a nice agricultural society and clearly possessed the ability to become literate as soon as they were exposed to the idea.

When I walk around our cities, I often think about what their ruins will look like to explorers in a thousand years.
“We also pass a ruin of what once must have been a grand building. The walls are marked with logos from a Belgian University. This must have once been some scientific study centre of sorts.”

By contrast, there are many cases of people being exposed to or given a new technology but completely lacking the ability to functionally adopt, improve, or maintain it. The Democratic Republic of the Congo, for example, is full of ruined colonial-era buildings and roads built by outsiders that the locals haven’t maintained. Without the Belgians, the infrastructure has crumbled.

Likewise, contact between Europeans and groups like the Australian Aborigines did not result in the Aborigines adopting European technology, nor in a new and improved fusion of Aboriginal and European tech, but in total disaster for the Aborigines. While the Japanese consistently top the charts in educational attainment, Aboriginal communities are still struggling with low literacy rates, high dropout rates, and low employment–the modern industrial economy, in short, has not been kind to them.

Along a completely different evolutionary pathway, cephalopods–squids, octopuses, and their tentacled ilk–are the world’s smartest invertebrates. This is pretty amazing, given that their nearest cousins are snails and clams. Yet cephalopod intelligence only goes so far. No one knows (yet) just how smart cephalopods are–squids in particular are difficult to work with in captivity because they are active hunter/swimmers and need a lot more space than the average aquarium can devote–but their brain power appears to be on the order of a dog’s.

After millions of years of evolution, cephalopods may represent the best nature can do–with an invertebrate. Throw in a backbone, and an animal can get a whole lot smarter.

And in chemistry, activation energy is the amount of energy you have to put into a chemical system before a reaction can begin. Stable chemical systems essentially exist at local optima, and it can require the input of quite a lot of energy before you get any action out of them. For atomic nuclei, iron is the global–should we say universal?–optimum, beyond which fusion is endothermic rather than exothermic. In other words, nuclear fusion in stellar cores ends with iron; elements heavier than iron are mostly produced when stars explode.
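
(The activation-energy analogy has a standard computational counterpart: simulated annealing, in which a “temperature” knob lets a search occasionally move downhill–exactly the energy input needed to escape a stable basin. A sketch, reusing the made-up height() landscape from the earlier hill-climbing example; again, all the names and numbers here are mine:)

```python
import math
import random

def height(x):
    """Same made-up landscape as before: hills near x = 2, 4, and 6."""
    hills = [(2, 1.0), (4, 1.5), (6, 2.0)]
    return sum(h * math.exp(-((x - c) ** 2) / 0.2) for c, h in hills)

def anneal(x, temp=1.0, cooling=0.999, step=0.2, iters=10_000):
    """Accept every uphill move; accept a downhill move with probability
    exp(delta/temp). A hot system can cross troughs; as it cools, it
    settles onto whatever hill it happens to be on."""
    for _ in range(iters):
        # Propose a nearby point, staying on the 0..8 stretch of landscape.
        candidate = min(8.0, max(0.0, x + random.uniform(-step, step)))
        delta = height(candidate) - height(x)
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = candidate
        temp *= cooling  # gradually withdraw the "activation energy"
    return x

# Start on the lowest hill; run it a few times -- it usually ends near x = 6,
# whereas the purely greedy climber from the earlier sketch never leaves x = 2.
print(round(anneal(2.0), 2))
```

Cool too fast and the search freezes on the nearest hill, which is more or less what “stuck at a local optimum” means.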

So what do local optima have to do with diversity?

The current vogue for diversity (“Diversity is our greatest strength”) suggests that we can reach global optima faster by simply smushing everyone together and letting them compare notes. Scroll back to the Japanese case. Edo Japan had a nice culture, but it was also beset by frequent famines. Meiji Japan doubled its population. Giving everyone, right now, the same technology and culture would bring everyone up to the same level.

But you can’t tell from within whether you are at a local or a global optimum. That’s how they work. The Indians likely would never have developed corn had they been exposed to wheat early on, and then Europeans would never have gotten to adopt corn, either. Good ideas can take a long time to refine and develop. Cultures can improve rapidly–even dramatically–by adopting each other’s good ideas, but they also need their own space and time to pursue their own paths, so that good but slowly developing ideas aren’t lost.

Which gets us back to Patchwork.