“Cultural Collapse”

Tablet recently had an interesting essay on the theme of “why did Trump win?”

The material-grievances theory and the cultural-resentments theory can fit together because, in both cases, they tell us that people voted for Trump out of a perceived self-interest, which was to improve their faltering economic and material conditions, or else to affirm their cultural standing vis-à-vis the non-whites and the bicoastal elites. Their votes were, from this standpoint, rationally cast. … which ultimately would suggest that 2016’s election was at least a semi-normal event, even if Trump has his oddities. But here is my reservation.

I do not think the election was normal. I think it was the strangest election in American history in at least one major particular, which has to do with the qualifications and demeanor of the winning candidate. American presidents over the centuries have always cultivated, after all, a style, which has been pretty much the style of George Washington, sartorially updated. … Now, it is possible that, over the centuries, appearances and reality have, on occasion, parted ways, and one or another president, in the privacy of his personal quarters, or in whispered instructions to his henchmen, has been, in fact, a lout, a demagogue, a thug, and a stinking cesspool of corruption. And yet, until just now, nobody running for the presidency, none of the serious candidates, would have wanted to look like that, and this was for a simple reason. The American project requires a rigorously republican culture, without which a democratic society cannot exist—a culture of honesty, logic, science, and open-minded debate, which requires, in turn, tolerance and mutual respect. Democracy demands decorum. And since the president is supposed to be democracy’s leader, the candidates for the office have always done their best to, at least, put on a good act.

The author (Paul Berman) then proposes Theory III: Broad Cultural Collapse:

 A Theory 3 ought to emphasize still another non-economic and non-industrial factor, apart from marriage, family structure, theology, bad doctors, evil pharmaceutical companies, and racist ideology. This is a broad cultural collapse. It is a collapse, at minimum, of civic knowledge—a collapse in the ability to identify political reality, a collapse in the ability to recall the nature of democracy and the American ideal. An intellectual collapse, ultimately. And the sign of this collapse is an inability to recognize that Donald Trump has the look of a foreign object within the American presidential tradition.

Berman is insightful until he blames cultural collapse on the educational system (those dastardly teachers just decided not to teach about George Washington, I guess.)

We can’t blame education. Very few people had many years of formal education of any sort back in 1776 or 1810–even in 1900, far fewer people completed high school than do today. The idea that high school civics class was more effectively teaching future voters what to look for in a president in 1815 than it does today therefore seems unlikely.

If anything, in my (admittedly limited, parental) interactions with the local schools, education seems to lag national sentiment. For example, the local schools still cover Columbus Day in a pro-Columbus manner (and I don’t even live in a particularly conservative area) and have special Veterans’ Day events. School curricula are, I think, fairly influenced by the desires of the Texas schools, because Texas is a big state that buys a lot of textbooks.

I know plenty of Boomers who voted for Trump, so if we’re looking at a change in school curricula, we’re looking at a shift that happened half a century ago (or more,) but only recently manifested.

That said, I definitely feel something coursing through society that I could call “Cultural Collapse.” I just don’t think the schools are to blame.

Yesterday I happened across a children’s book from the 1920s about famous musicians. Interwoven with the biographies of Beethoven and Mozart were political comments about kings and queens, European social structure, and how these musicians of course saw through all of this royalty business and wanted to make music for the common people. It was an articulated ideology of democracy.

Sure, people today still think democracy is important, but the framing (and phrasing) is different. The book we recently read of mathematicians’ biographies didn’t stop to tell us how highly the mathematicians thought of the idea of common people voting (rather, when it bothered with ideology, it focused on increasing representation of women in mathematics and emphasizing the historical obstacles they faced.)

Meanwhile, as the NY Times reports, the percentage of Americans who think living in a democracy is important is declining:

According to the Mounk-Foa early-warning system, signs of democratic deconsolidation in the United States and many other liberal democracies are now similar to those in Venezuela before its crisis.

Across numerous countries, including Australia, Britain, the Netherlands, New Zealand, Sweden and the United States, the percentage of people who say it is “essential” to live in a democracy has plummeted, and it is especially low among younger generations. …

Support for autocratic alternatives is rising, too. Drawing on data from the European and World Values Surveys, the researchers found that the share of Americans who say that army rule would be a “good” or “very good” thing had risen to 1 in 6 in 2014, compared with 1 in 16 in 1995.

That trend is particularly strong among young people. For instance, in a previously published paper, the researchers calculated that 43 percent of older Americans believed it was illegitimate for the military to take over if the government were incompetent or failing to do its job, but only 19 percent of millennials agreed. The same generational divide showed up in Europe, where 53 percent of older people thought a military takeover would be illegitimate, while only 36 percent of millennials agreed.

Note, though, that this is not a local phenomenon–any explanation that explains why support for democracy is down in the US needs to also explain why it’s down in Sweden, Australia, Britain, and the Netherlands (and maybe why it wasn’t so popular there in the first place.)

Here are a few different theories besides failing schools:

  1. Less common culture, due to integration and immigration
  2. More international culture, due to the internet, TV, and similar technologies
  3. Disney

Put yourself in your grandfather or great-grandfather’s shoes, growing up in the 1910s or 20s. Cars were not yet common; chances were if he wanted to go somewhere, he walked or rode a horse. Telephones and radios were still rare. TV barely existed.

If you wanted to talk to someone, you walked over to them and talked. If you wanted to talk to someone from another town, either you or they had to travel, often by horse or wagon. For long-distance news, you had newspapers and a few telegraph wires.

News traveled slowly. People traveled slowly (most people didn’t ride trains regularly.) Most of the people you talked to were folks who lived nearby, in your own community. Everyone not from your community was some kind of outsider.

There’s a story from Albion’s Seed:

During World War II, for example, three German submariners escaped from Camp Crossville, Tennessee. Their flight took them to an Appalachian cabin, where they stopped for a drink of water. The mountain granny told them to “git.” When they ignored her, she promptly shot them dead. The sheriff came, and scolded her for shooting helpless prisoners. Granny burst into tears, and said that she would not have done it if she had known they were Germans. The exasperated sheriff asked her what in “tarnation” she thought she was shooting at. “Why,” she replied, “I thought they was Yankees!”

And then your grandfather got shipped out to get shot at somewhere in Europe or the Pacific.

Today, technology has completely transformed our lives. When we want to talk to someone or hear their opinion, we can just pick up the phone, visit facebook, or flip on the TV. We have daily commutes that would have taken our ancestors a week to walk. People expect to travel thousands of miles for college and jobs.

The effect is a curious inversion: In a world where you can talk to anyone, why talk to your neighbors? Personally, I spend more time talking to people in Britain than to the folks next door (and I like my neighbors.)

Now, this blog was practically founded on the idea that this technological shift in the way ideas (memes) are transmitted has a profound effect on the kinds of ideas that are transmitted. When ideas must be propagated between relatives and neighbors, these ideas are likely to promote your own material well-being (as you must survive well enough to continue propagating the idea for it to go on existing,) whereas when ideas can be easily transmitted between strangers who don’t even live near each other, the ideas need not promote personal survival–they just need to sound good. (I went into more detail on this idea back in Viruses Want you to Spread Them, Mitochondrial Memes, and The Progressive Virus.)
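Here is a toy model of that argument (all numbers invented for illustration): a meme that hurts its host’s survival dies out when it can only be passed from parent to child, but can persist, and even spread, when it can also hop between unrelated hosts, so long as it is catchy enough.

```python
import random

# Toy model of vertical vs. horizontal meme transmission.
# All parameters are made up; this only illustrates the selection argument.
def simulate(horizontal: bool, generations=50, pop=1000,
             harm=0.3, catchiness=0.6, seed=0):
    random.seed(seed)
    # Start with 10% of the population carrying a survival-reducing meme.
    carriers = [True] * (pop // 10) + [False] * (pop - pop // 10)
    for _ in range(generations):
        # Selection: carriers die before reproducing with probability `harm`.
        survivors = [c for c in carriers
                     if random.random() > (harm if c else 0.0)]
        # Vertical transmission: children copy a surviving parent's meme state.
        children = [random.choice(survivors) for _ in range(pop)]
        if horizontal:
            # Horizontal transmission: a catchy meme also jumps between
            # unrelated hosts, in proportion to how common it already is.
            rate = catchiness * sum(children) / pop
            children = [c or (random.random() < rate) for c in children]
        carriers = children
    return sum(carriers) / pop

print("vertical only:        ", simulate(horizontal=False))  # meme nearly extinct
print("vertical + horizontal:", simulate(horizontal=True))   # meme persists and spreads
```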

How do these technological shifts affect how we form communities?

From Bowling Alone:

In a groundbreaking book based on vast data, Putnam shows how we have become increasingly disconnected from family, friends, neighbors, and our democratic structures– and how we may reconnect.

Putnam warns that our stock of social capital – the very fabric of our connections with each other, has plummeted, impoverishing our lives and communities.

Putnam draws on evidence including nearly 500,000 interviews over the last quarter century to show that we sign fewer petitions, belong to fewer organizations that meet, know our neighbors less, meet with friends less frequently, and even socialize with our families less often. We’re even bowling alone. More Americans are bowling than ever before, but they are not bowling in leagues. Putnam shows how changes in work, family structure, age, suburban life, television, computers, women’s roles and other factors have contributed to this decline.

to data on how many people don’t have any friends:

The National Science Foundation (NSF) reported in its General Social Survey (GSS) that unprecedented numbers of Americans are lonely. Published in the American Sociological Review (ASR) and authored by Miller McPhearson, Lynn Smith-Lovin, and Matthew Brashears, sociologists at Duke and the University of Arizona, the study featured 1,500 face-to-face interviews where more than a quarter of the respondents — one in four — said that they have no one with whom they can talk about their personal troubles or triumphs. If family members are not counted, the number doubles to more than half of Americans who have no one outside their immediate family with whom they can share confidences. Sadly, the researchers noted increases in “social isolation” and “a very significant decrease in social connection to close friends and family.”

Rarely has news from an academic paper struck such a responsive nerve with the general public. These dramatic statistics from ASR parallel similar trends reported by the Beverly LaHaye Institute — that over the 40 years from 1960 to 2000 the Census Bureau had expanded its analysis of what had been a minor category.  The Census Bureau categorizes the term “unrelated individuals” to designate someone who does not live in a “family group.” Sadly, we’ve seen the percentage of persons living as “unrelated individuals” almost triple, increasing from 6 to 16 percent of all people during the last 40 years. A huge majority of those classified as “unrelated individuals” (about 70 percent) lived alone.

it seems that interpersonal trust is deteriorating:

Long-run data from the US, where the General Social Survey (GSS) has been gathering information about trust attitudes since 1972, suggests that people trust each other less today than 40 years ago. This decline in interpersonal trust in the US has been coupled with a long-run reduction in public trust in government – according to estimates compiled by the Pew Research Center since 1958, today trust in the government in the US is at historically low levels.


Interpersonal trust attitudes correlate strongly with religious affiliation and upbringing. Some studies have shown that this strong positive relationship remains after controlling for several survey-respondent characteristics.1 This, in turn, has led researchers to use religion as a proxy for trust, in order to estimate the extent to which economic outcomes depend on trust attitudes. Estimates from these and other studies using an instrumental-variable approach, suggest that trust has a causal impact on economic outcomes.2 This suggests that the remarkable cross-country heterogeneity in trust that we observe today, can explain a significant part of the historical differences in cross-country income levels.


Measures of trust from attitudinal survey questions remain the most common source of data on trust. Yet academic studies have shown that these measures of trust are generally weak predictors of actual trusting behaviour. Interestingly, however, questions about trusting attitudes do seem to predict trustworthiness. In other words, people who say they trust other people tend to be trustworthy themselves.3

Just look at that horrible trend of migrants being kept out of Europe

Our technological shifts haven’t just affected ideas and conversations–with people able to travel thousands of miles in an afternoon, they’ve also affected the composition of communities. The US in 1920 was almost 90% white and 10% black (with that black population concentrated in the segregated South). All other races together totaled only a couple percent. Today, the US is <65% white, 13% black, 16% Hispanic, 6% Asian and Native American, and 9% “other” or multi-racial (the categories overlap, since the census counts Hispanic as an ethnicity rather than a race).

Similar changes have happened in Europe, both with the creation of the Free Movement Zone and the discovery that the Mediterranean isn’t that hard to cross, though the composition of the newcomers obviously differs.

Diversity may have its benefits, but one of the things it isn’t is a common culture.

With all of these changes, do I really feel that there is anything particularly special about my local community and its norms over those of my British friends?

What about Disney?

Well, Disney’s most profitable product hasn’t exactly been pro-democracy, though I doubt a few princess movies can actually budge people’s political compasses or make anyone vote for Trump (or Hillary.) But what about the general content of children’s stories? It sure seems like there are a lot fewer stories focused on characters from American history than in the days when Davy Crockett was the biggest thing on TV.

Of course this loops back into technological changes, as American TV and movies are enjoyed by an increasingly non-American audience and media content is driven by advertisers’ desire to reach specific audiences (eg, the “rural purge” in TV programming, when popular TV shows aimed at more rural or older audiences were cancelled in favor of programs featuring urban characters, which advertisers believed would appeal to younger viewers with more cash to spend.)

If cultural collapse is happening, it’s not because we lack for civics classes, but because civics classes alone cannot create a civic culture where there is none.


Logan Paul and the Algorithms of Outrage

Leaving aside the issues of “Did Logan Paul actually do anything wrong?” and “Is changing YouTube’s policies actually in Game Theorist’s interests?” Game Theorist makes a good point: while YouTube might want to say, for PR reasons, that it is doing something about big, bad, controversial videos like Logan Paul’s, it also makes money off those same videos. YouTube–like many other parts of the internet–is primarily click driven. (Few of us are paying money for programs on YouTube Red.) YouTube wants views, and controversy drives views.

That doesn’t mean YouTube wants just any content–a reputation for having a bunch of pornography would probably have a damaging effect on channels aimed at small children, as their parents would click elsewhere. But aside from the actual corpse, Logan’s video wasn’t the sort of thing that would drive away small viewers–they’d get bored with the non-cartoon footage of people talking to the camera long before the suicide even came up.

Logan Paul actually managed to hit a very sweet spot: controversial enough to draw in visitors (tons of them) but not so controversial that he’d drive away other visitors.

In case you’ve forgotten the controversy in a fog of other controversies, LP’s video about accidentally finding a suicide in the Suicide Forest was initially well-received, racking up thousands of likes and views before someone got offended and started up the outrage machine. Once the outrage machine got going, public sentiment turned on a dime and LP was suddenly the subject of a full two or three days of Twitter hate. The hate, of course, got YouTube more views. LP took down the video and posted an apology–which generated more attention. Major media outlets were now covering the story. Even Tablet managed to quickly come up with an article: Want a New Years Resolution? Don’t be Like Logan Paul.

And it worked. I passed up Tablet’s regular article on Trump and Bagels and Culture, but I clicked on that article about Logan Paul because I wanted to know what on earth Tablet had to say about LP, a YouTuber whom, 24 hours prior, I had never heard of.

And the more respectable (or at least highly-trafficked) news outlets picked up the story, the higher Logan’s videos rose on the YouTube charts. And as more people watched more of LP’s other videos, they found more things to be offended at. For example, once he ran through the streets of Japan holding a fish. A FISH, I tell you. He waved this fish at people and was generally very annoying.

I don’t like LP’s style of humor, but I’m not getting worked up over a guy waving a fish around.

So understand this: you are in an outrage machine. The purpose of the outrage machine is to drive traffic, which makes clicks, which result in ad revenue. There are probably whole websites (Huffpo, CNN) that derive a significant percent of their profits from hate-clicks–that is, intentionally posting incendiary garbage not because they believe it or think it is just or true or appeals to their base, but because they can get people to click on it in sheer shock or outrage.

Your emotions–your “emotional labor,” as the SJWs call it–are being turned into someone else’s dollars.

And the result is a country that is increasingly polarized. Increasingly outraged. Increasingly exhausted.

Step back for a moment. Take a deep breath. Get some fresh air. Ask yourself, “Does this really matter? Am I actually helping anyone? Will I remember this in a week?”

I’d blame the SJWs for the outrage machine–and really, they are good at running it–but I think it started with CNN and “24 hour news.” You have to do something to fill that time. Then came Fox News, which was like CNN, but more controversial in order to lure viewers away from the more established channel. Now we have the interplay of Facebook, Twitter, HuffPo, online newspapers, YouTube, etc–driven largely by automated algorithms designed to maximize clicks–even hate clicks.
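Here is a rough sketch of why that happens. This is a hypothetical toy scorer, not YouTube’s (or anyone’s) actual ranking code: if the objective is purely predicted clicks, and outrage reliably drives clicks, the ranker ends up promoting outrage, with no term anywhere for accuracy, fairness, or downstream harm.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_click_rate: float   # learned from past engagement
    predicted_outrage: float      # outrage tends to feed sharing and clicks

def engagement_score(v: Video) -> float:
    # Hypothetical weights: a purely click-maximizing objective has no reason
    # to discount outrage, so outrage effectively earns a ranking bonus.
    return v.predicted_click_rate + 0.5 * v.predicted_outrage

videos = [
    Video("Calm, accurate explainer", predicted_click_rate=0.04, predicted_outrage=0.0),
    Video("You won't BELIEVE what he did!!!", predicted_click_rate=0.09, predicted_outrage=0.8),
]

# The incendiary video wins the ranking every time.
for v in sorted(videos, key=engagement_score, reverse=True):
    print(f"{engagement_score(v):.2f}  {v.title}")
```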

The Logan Paul controversy is just one example out of thousands, but let’s take a moment and think about whether it really mattered. Some guy whose job description is “makes videos of his life and posts them on YouTube” was already shooting a video about his camping trip when he happened upon a dead body. He filmed the body, called the police, canceled his camping trip, downed a few cups of sake while talking about how shaken he was, and ended the video with a plea that people seek help and not commit suicide.

In between these events was laughter–I interpret it as nervous laughter in an obviously distressed person. Other people interpret this as mocking. Even if you think LP was mocking the deceased, I think you should be more concerned that Japan has a “Suicide Forest” in the first place.

Let’s look at a similar case: When three year old Alan Kurdi drowned, the photograph of his dead body appeared on websites and newspapers around the world–earning thousands of dollars for the photographers and news agencies. Politicians then used little Alan’s death to push particular political agendas–Hillary Clinton even talked about Alan Kurdi’s death in one of the 2016 election debates. Alan Kurdi’s death was extremely profitable for everyone making money off the photograph, but no one got offended over this.

Why is it acceptable for photographers and media agencies to make money off a three year old boy who drowned because his father was a negligent fuck who didn’t put a life vest on him*, but not acceptable for Logan Paul to make money off a guy who chose to kill himself and then leave his body hanging in public where any random person could find it?

Elian Gonzalez, sobbing, torn at gunpoint from his relatives. BTW, this photo won the 2001 Pulitzer Prize for Breaking News Photography.

Let’s take a more explicitly political case. Remember when Bill Clinton and Janet Reno sent 130 heavily armed INS agents to the home of child refugee Elian Gonzalez’s relatives** so they could kick him out of the US and send him back to Cuba?

Now imagine Donald Trump sending SWAT teams after sobbing children. How would people react?

The outrage machine functions because people think it is good. It convinces people that it is casting light on terrible problems that need correcting. People are getting offended at things that they wouldn’t have if the outrage machine hadn’t told them to. You think you are serving justice. In reality, you are mad at a man for filming a dead guy and running around Japan with a fish. Jackass did worse, and it was on MTV for two years. Game Theorist wants more consequences for people like Logan Paul, but he doesn’t realize that anyone can get offended at just about anything. His videos have graphic descriptions of small children being murdered (in videogame contexts, like Five Nights at Freddy’s or “What would happen if the babies in Mario Kart were involved in real car crashes at racing speeds?”) I don’t find this “family friendly.” Sometimes I (*gasp*) turn off his videos as a result. Does that mean I want a Twitter mob to come destroy his livelihood? No. It means a Twitter mob could destroy his livelihood.

For that matter, as Game Theorist himself notes, the algorithm itself rewards and amplifies outrage–meaning that people are incentivised to create completely false outrage against innocent people. Punishing one group of people more because the algorithm encourages bad behavior in other people is cruel and does not solve the problem. Changing the algorithm would solve the problem, but the algorithm is what makes YouTube money.

In reality, the outrage machine is pulling the country apart–and I don’t know about you, but I live here. My stuff is here; my loved ones are here.

The outrage machine must stop.

*I remember once riding in an airplane with my father. As the flight crew explained that in the case of a sudden loss of cabin pressure, you should secure your own mask before assisting your neighbors, his response was a very vocal “Hell no, I’m saving my kid first.” Maybe not the best idea, but the sentiment is sound.

**When the boat Elian Gonzalez and his family were riding in capsized, his mother and her boyfriend put him in an inner tube, saving his life even though they drowned.

Testosterone metabolization, autism, male brain, and female identity

I began this post intending to write about testosterone metabolization in autism and possible connections with transgender identity, but realized halfway through that I didn’t actually know whether the autist-trans connection was primarily male-to-female or female-to-male. I had assumed that the relevant population is primarily MtF because both autists and trans people are primarily male, but both groups do have female populations that are large enough to contribute significantly. Here’s a sample of the data I’ve found so far:

A study conducted by a team of British scientists in 2012 found that of a pool of individuals not diagnosed on the autism spectrum, female-to-male (FTM) transgender people have higher rates of autistic features than do male-to-female (MTF) transgender people or cisgender males and females. Another study, which looked at children and adolescents admitted to a gender identity clinic in the Netherlands, found that almost 8 percent of subjects were also diagnosed with ASD.

Note that both of these studies are looking at trans people and assessing whether or not they have autism symptoms, not looking at autists and asking if they have trans symptoms. Given the characterization of autism as “extreme male brain” and that autism is diagnosed in males at about 4x the rate of females, the fact that there is some overlap between “women who think they think like men” and “traits associated with male thought patterns” is not surprising.

If the reported connection between autism and trans identity is just “autistic women feel like men,” that’s pretty non-mysterious and I just wasted an afternoon.

Though the data I have found so far still does not look directly at autists and ask how many of them have trans symptoms, the Wikipedia page devoted to transgender and transsexual computer programmers lists only MtFs and no FtMs. Whether or not this is a pattern throughout the wider autism community, it definitely seems to be a thing among programmers. (Relevant discussion.)

So, returning to the original post:

Autism contains an amusing contradiction: on the one hand, autism is sometimes characterized as “extreme male brain,” and on the other hand, (some) autists (may be) more likely than neurotypicals to self-identify as transwomen–that is, biological men who see themselves as women. This seems contradictory: if autists are more masculine, mentally, than the average male, why don’t they identify as football players, army rangers, or something else equally masculine? For that matter, why isn’t a group with “extreme male brains” regarded as more, well, masculine?

(And if autists have extreme male brains, does that mean football players don’t? Do football players have more feminine brains than autists? Do colorless green ideas sleep furiously? DO WORDS MEAN?)


In favor of the “extreme male brain” hypothesis, we have evidence that testosterone is important for certain brain functions, like spatial recognition. For example, from Testosterone and the brain:

Gender differences in spatial recognition, and age-related declines in cognition and mood, point towards testosterone as an important modulator of cerebral functions. Testosterone appears to activate a distributed cortical network, the ventral processing stream, during spatial cognition tasks, and addition of testosterone improves spatial cognition in younger and older hypogonadal men. In addition, reduced testosterone is associated with depressive disorders.

(Note that women also suffer depression at higher rates than men.)

So people with more testosterone are better at spatial cognition and other tasks that “autistic” brains typically excel at, and brains with less testosterone tend to be moody and depressed.

But hormones are tricky things. Where do they come from? Where do they go? How do we use them?

According to Wikipedia:

During the second trimester [of pregnancy], androgen level is associated with gender formation.[13] This period affects the femininization or masculinization of the fetus and can be a better predictor of feminine or masculine behaviours such as sex typed behaviour than an adult’s own levels. A mother’s testosterone level during pregnancy is correlated with her daughter’s sex-typical behavior as an adult, and the correlation is even stronger than with the daughter’s own adult testosterone level.[14]

… Early infancy androgen effects are the least understood. In the first weeks of life for male infants, testosterone levels rise. The levels remain in a pubertal range for a few months, but usually reach the barely detectable levels of childhood by 4–6 months of age.[15][16] The function of this rise in humans is unknown. It has been theorized that brain masculinization is occurring since no significant changes have been identified in other parts of the body.[17] The male brain is masculinized by the aromatization of testosterone into estrogen, which crosses the blood–brain barrier and enters the male brain, whereas female fetuses have α-fetoprotein, which binds the estrogen so that female brains are not affected.[18]

(Bold mine.)

Let’s re-read that: the male brain is masculinized by the aromatization of testosterone into estrogen.

If that’s not a weird sentence, I don’t know what is.

Let’s hop over to the scientific literature, eg, Estrogen Actions in the Brain and the Basis for Differential Action in Men and Women: A Case for Sex-Specific Medicines:

Burgeoning evidence now documents profound effects of estrogens on learning, memory, and mood as well as neurodevelopmental and neurodegenerative processes. Most data derive from studies in females, but there is mounting recognition that estrogens play important roles in the male brain, where they can be generated from circulating testosterone by local aromatase enzymes or synthesized de novo by neurons and glia. Estrogen-based therapy therefore holds considerable promise for brain disorders that affect both men and women. However, as investigations are beginning to consider the role of estrogens in the male brain more carefully, it emerges that they have different, even opposite, effects as well as similar effects in male and female brains. This review focuses on these differences, including sex dimorphisms in the ability of estradiol to influence synaptic plasticity, neurotransmission, neurodegeneration, and cognition, which, we argue, are due in a large part to sex differences in the organization of the underlying circuitry.

Hypothesis: the way testosterone works in the brain (where we both do math and “feel” male or female) and the way it works in the muscles might be very different.

Do autists actually differ from other people in testosterone (or other hormone) levels?

In Elevated rates of testosterone-related disorders in women with autism spectrum conditions, researchers surveyed autistic women and mothers of autistic children about various testosterone-related medical conditions:

Compared to controls, significantly more women with ASC [Autism Spectrum Conditions] reported (a) hirsutism, (b) bisexuality or asexuality, (c) irregular menstrual cycle, (d) dysmenorrhea, (e) polycystic ovary syndrome, (f) severe acne, (g) epilepsy, (h) tomboyism, and (i) family history of ovarian, uterine, and prostate cancers, tumors, or growths. Compared to controls, significantly more mothers of ASC children reported (a) severe acne, (b) breast and uterine cancers, tumors, or growths, and (c) family history of ovarian and uterine cancers, tumors, or growths.

Androgenic Activity in Autism has an unfortunately low number of subjects (N=9) but their results are nonetheless intriguing:

Three of the children had exhibited explosive aggression against others (anger, broken objects, violence toward others). Three engaged in self-mutilations, and three demonstrated no aggression and were in a severe state of autistic withdrawal. The appearance of aggression against others was associated with having fewer of the main symptoms of autism (autistic withdrawal, stereotypies, language dysfunctions).

Three of their subjects (they don’t say which, but presumably from the first group,) had abnormally high testosterone levels (including one of the girls in the study.) The other six subjects had normal androgen levels.

This is the first report of an association between abnormally high androgenic activity and aggression in subjects with autism. Although a previously reported study did not find group mean elevations in plasma testosterone in prepubertal autistic subjects (4), it appears here that in certain autistic individuals, especially those in puberty, hyperandrogeny may play a role in aggressive behaviors. Also, there appear to be distinct clinical forms of autism that are based on aggressive behaviors and are not classified in DSM-IV. Our preliminary findings suggest that abnormally high plasma testosterone concentration is associated with aggression against others and having fewer of the main autistic symptoms.

So, some autists do have abnormally high testosterone levels, but those same autists are less autistic, overall, than other autists. More autistic behavior, aggression aside, is associated with normal hormone levels. Probably.

But of course that’s not fetal or early infancy testosterone levels. Unfortunately, it’s rather difficult to study fetal testosterone levels in autists, as few autists were diagnosed as fetuses. However, Foetal testosterone and autistic traits in 18 to 24-month-old children comes close:

Levels of FT [Fetal Testosterone] were analysed in amniotic fluid and compared with autistic traits, measured using the Quantitative Checklist for Autism in Toddlers (Q-CHAT) in 129 typically developing toddlers aged between 18 and 24 months (mean ± SD 19.25 ± 1.52 months). …

Sex differences were observed in Q-CHAT scores, with boys scoring significantly higher (indicating more autistic traits) than girls. In addition, we confirmed a significant positive relationship between FT levels and autistic traits.

I feel like this is veering into “we found that boys score higher on a test of male traits than girls did” territory, though.

In Polymorphisms in Genes Involved in Testosterone Metabolism in Slovak Autistic Boys, researchers found:

The present study evaluates androgen and estrogen levels in saliva as well as polymorphisms in genes for androgen receptor (AR), 5-alpha reductase (SRD5A2), and estrogen receptor alpha (ESR1) in the Slovak population of prepubertal (under 10 years) and pubertal (over 10 years) children with autism spectrum disorders. The examined prepubertal patients with autism, pubertal patients with autism, and prepubertal patients with Asperger syndrome had significantly increased levels of salivary testosterone (P < 0.05, P < 0.01, and P < 0.05, respectively) in comparison with control subjects. We found a lower number of (CAG)n repeats in the AR gene in boys with Asperger syndrome (P < 0.001). Autistic boys had an increased frequency of the T allele in the SRD5A2 gene in comparison with the control group. The frequencies of T and C alleles in ESR1 gene were comparable in all assessed groups.

What’s the significance of CAG repeats in the AR gene? Apparently they vary inversely with sensitivity to androgens:

Individuals with a lower number of CAG repeats exhibit higher AR gene expression levels and generate more functional AR receptors increasing their sensitivity to testosterone…

Fewer repeats, more sensitivity to androgens. The SRD5A2 gene is also involved in testosterone metabolization, though I’m not sure exactly what the T allele does relative to the other variants.

But just because there’s a lot of something in the blood (or saliva) doesn’t mean the body is using it. Diabetics can have high blood sugar because their bodies lack the necessary insulin to move the sugar from the blood into their cells. Fewer androgen receptors could mean the body is metabolizing testosterone less effectively, which in turn leaves more of it floating in the blood… Biology is complicated.

What about estrogen and the autistic brain? That gets really complicated. According to Sex Hormones in Autism: Androgens and Estrogens Differentially and Reciprocally Regulate RORA, a Novel Candidate Gene for Autism:

Here, we show that male and female hormones differentially regulate the expression of a novel autism candidate gene, retinoic acid-related orphan receptor-alpha (RORA) in a neuronal cell line, SH-SY5Y. In addition, we demonstrate that RORA transcriptionally regulates aromatase, an enzyme that converts testosterone to estrogen. We further show that aromatase protein is significantly reduced in the frontal cortex of autistic subjects relative to sex- and age-matched controls, and is strongly correlated with RORA protein levels in the brain.

If autists are bad at converting testosterone to estrogen, this could leave extra testosterone floating around in their blood… but it doesn’t explain their supposed “extreme male brain.” Here’s another study on the same subject, since it’s confusing:

Comparing the brains of 13 children with and 13 children without autism spectrum disorder, the researchers found a 35 percent decrease in estrogen receptor beta expression as well as a 38 percent reduction in the amount of aromatase, the enzyme that converts testosterone to estrogen.

Levels of estrogen receptor beta proteins, the active molecules that result from gene expression and enable functions like brain protection, were similarly low. There was no discernable change in expression levels of estrogen receptor alpha, which mediates sexual behavior.

I don’t know if anyone has tried injecting RORA-deficient mice with estrogen, but here is a study about the effects of injecting reelin-deficient mice with estrogen:

The animals in the new studies, called ‘reeler’ mice, have one defective copy of the reelin gene and make about half the amount of reelin compared with controls. …

Reeler mice with one faulty copy serve as a model of one of the most well-established neuro-anatomical abnormalities in autism. Since the mid-1980s, scientists have known that people with autism have fewer Purkinje cells in the cerebellum than normal. These cells integrate information from throughout the cerebellum and relay it to other parts of the brain, particularly the cerebral cortex.

But there’s a twist: both male and female reeler mice have less reelin than control mice, but only the males lose Purkinje cells. …

In one of the studies, the researchers found that five days after birth, reeler mice have higher levels of testosterone in the cerebellum compared with genetically normal males3.

Keller’s team then injected estradiol — a form of the female sex hormone estrogen — into the brains of 5-day-old mice. In the male reeler mice, this treatment increases reelin levels in the cerebellum and partially blocks Purkinje cell loss. Giving more estrogen to female reeler mice has no effect — but females injected with tamoxifen, an estrogen blocker, lose Purkinje cells. …

In another study, the researchers investigated the effects of reelin deficiency and estrogen treatment on cognitive flexibility — the ability to switch strategies to solve a problem4. …

“And we saw indeed that the reeler mice are slower to switch. They tend to persevere in the old strategy,” Keller says. However, male reeler mice treated with estrogen at 5 days old show improved cognitive flexibility as adults, suggesting that the estrogen has a long-term effect.

This still doesn’t explain why autists would self-identify as transgender women (mtf) at higher rates than average, but it does suggest that any who do start hormone therapy might receive benefits completely independent of gender identity.

Let’s stop and step back a moment.

Autism is, unfortunately, badly defined. As the saying goes, if you’ve met one autist, you’ve met one autist. There are probably a variety of different, complicated things going on in the brains of different autists simply because a variety of different, complicated conditions are all being lumped together under a single label. Any mental disability that can include both non-verbal people who can barely dress and feed themselves and require lifetime care and billionaires like Bill Gates is a very badly defined condition.

(Unfortunately, people diagnose autism with questionnaires that include questions like “Is the child pedantic?” which could be equally true of both an autistic child and a child who is merely very smart and has learned more about a particular subject than their peers and so is responding in more detail than the adult is used to.)

The average autistic person is not a programmer. Autism is a disability, and the average diagnosed autist is pretty darn disabled. Among the people who have jobs and friends but nonetheless share some symptoms with formally diagnosed autists, though, programmer and the like appear to be pretty popular professions.

Back in my day, we just called these folks nerds.

Here’s a theory from a completely different direction: People feel the differences between themselves and a group they are supposed to fit into and associate with a lot more strongly than the differences between themselves and a distant group. Growing up, you probably got into more conflicts with your siblings and parents than with random strangers, even though–or perhaps because–your family is nearly identical to you genetically, culturally, and environmentally. “I am nothing like my brother!” a man declares, while simultaneously affirming that there is a great deal in common between himself and members of a race and culture from the other side of the planet. Your  coworker, someone specifically selected for the fact that they have similar mental and technical aptitudes and training as yourself, has a distinct list of traits that drive you nuts, from the way he staples papers to the way he pronounces his Ts, while the women of an obscure Afghan tribe of goat herders simply don’t enter your consciousness.

Nerds, somewhat by definition, don’t fit in. You don’t worry much about fitting into a group you’re not part of in the first place–you probably don’t worry much about whether or not you fit in with Melanesian fishermen–but most people work hard at fitting in with their own group.

So if you’re male, but you don’t fit in with other males (say, because you’re a nerd,) and you’re down at the bottom of the high school totem pole and feel like all of the women you’d like to date are judging you negatively next to the football players, then you might feel, rather strongly, the differences between you and other males. Other males are aggressive, they call you a faggot, they push you out of their spaces and threaten you with violence, and there’s very little you can do to respond besides retreat into your “nerd games.”

By contrast, women are polite to you, not aggressive, and don’t aggressively push you out of their spaces. Your differences with them are much less problematic, so you feel like you “fit in” with them.

(There is probably a similar dynamic at play with American men who are obsessed with anime. It’s not so much that they are truly into Japanese culture–which is mostly about quietly working hard–as they don’t fit in very well with their own culture.) (Note: not intended as a knock on anime, which certainly has some good works.)

And here’s another theory: autists have some interesting difficulties with constructing categories and making inferences from data. They also have trouble going along with the crowd, and may have fewer “mirror neurons” than normal people. So maybe autists just process the categories of “male” and “female” a little differently than everyone else, and in a small subset of autists, this results in trans identity.*

And another: maybe there are certain intersex disorders which result in differences in brain wiring/organization. (Yes, there are real intersex disorders, like Klinefelter’s, in which people have XXY chromosomes instead of XX or XY.) In a small set of cases, these unusually wired brains may be extremely good at doing certain tasks (like programming), resulting in people who are both “autism spectrum” and “trans”. This is actually the theory I’ve been running with for years, though it is not incompatible with the hormonal theories discussed above.

But we are talking small: trans people of any sort are extremely rare, probably on the order of <1/1000. Even if autists were trans at 8 times the rates of non-autists, that’s still only 8/1000 or 1/125. Autists themselves are pretty rare (estimates vary, but the vast majority of people are not autistic at all,) so we are talking about a very small subset of a very small population in the first place. We only notice these correlations at all because the total population has gotten so huge.
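A quick back-of-the-envelope calculation shows how small this joint population is in absolute terms. The 1/1000 and 8x figures come from the text above; the autism prevalence is an assumption for illustration only, since estimates vary widely.

```python
us_population = 325_000_000
trans_rate    = 1 / 1000     # "<1/1000" from the text above
autism_rate   = 0.015        # assumed for illustration; estimates vary
relative_rate = 8            # the hypothetical 8x figure from the text

autistic_trans = us_population * autism_rate * trans_rate * relative_rate
print(int(autistic_trans))   # roughly 39,000 people out of 325 million, ~0.01%
```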

Sometimes, extremely rare things are random chance.


Local Optima, Diversity, and Patchwork

Local optima–or optimums, if you prefer–are an illusion created by distance. A man standing on the hilltop at (approximately) X=2 may see land sloping downward all around himself and think that he is at the highest point on the graph.

But hand him a telescope, and he discovers that the fellow standing on the hilltop at X=4 is even higher than he is. And hand the fellow at X=4 a telescope, and he’ll discover that X=6 is even higher.

A global optimum is the best possible way of doing something; a local optimum can look like a global optimum because all of the other, similar ways of doing the same thing are worse. To get from a local optimum to a global optimum, you might have to endure a significant trough of things going worse before you reach your destination. (Those troughs would be the points X=3.03 and X=5.02 on the graph.) If the troughs are short and shallow enough, people can accidentally power their way through. If long and deep enough, people get stuck.
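For readers who like to see the mechanics, here is a minimal hill-climbing sketch. The landscape is invented for illustration (it is not the graph described above); the point is that a greedy climber that refuses to step downhill gets stuck on whichever hill it starts on.

```python
import random

# Two hills: a local optimum at x = 2 (height 3) and a global optimum
# at x = 4 (height 5), separated by a trough where the height drops to 0.
def height(x):
    return max(0.0, 3 - 4 * (x - 2) ** 2) + max(0.0, 5 - 4 * (x - 4) ** 2)

def hill_climb(x, step=0.1, iterations=1000):
    """Greedy hill climbing: only accept moves that go uphill."""
    for _ in range(iterations):
        candidate = x + random.choice([-step, step])
        if height(candidate) > height(x):
            x = candidate
    return x

# Starting on the lower hill, every small step toward x = 4 first goes
# downhill into the trough, so the greedy climber never crosses it.
print(hill_climb(1.5))   # ends up stuck near 2, the local optimum
print(hill_climb(3.8))   # ends up near 4, the global optimum
```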

The introduction of new technology, exposure to another culture’s solutions, or even random chance can expose a local optimum and propel a group to cross that trough.

For example, back in 1400, Europeans were perfectly happy to get their Chinese silks, spices, and porcelains via the overland Silk Road. But with the fall of Constantinople to the Turks in 1453, the Silk Road became more fragmented and difficult (ie dangerous, ie expensive) to travel. The increased cost of the normal road prompted Europeans to start exploring other, less immediately profitable trade routes–like the possibility of sailing clear around the world, via the ocean, to the other side of China.

Without the eastern trade routes first diminishing in profitability, it wouldn’t have been economically viable to explore and develop the western routes. (With the discovery of the Americas, in the process, a happy accident.)

West Hunter (Greg Cochran) writes frequently about local optima; here’s an excerpt on plant domestication:

The reason that a few crops account for the great preponderance of modern agriculture is that a bird in the hand – an already-domesticated, already- optimized crop – feeds your family/makes money right now, while a potentially useful yet undomesticated crop doesn’t. One successful domestication tends to inhibit others that could flourish in the same niche. Several crops were domesticated in the eastern United States, but with the advent of maize and beans ( from Mesoamerica) most were abandoned. Maybe if those Amerindians had continued to selectively breed sumpweed for a few thousand years, it could have been a contender: but nobody is quite that stubborn.

Teosinte was an unpromising weed: it’s hard to see why anyone bothered to try to domesticate it, and it took a long time to turn it into something like modern maize. If someone had brought wheat to Mexico six thousand years ago, likely the locals would have dropped maize like a hot potato. But maize ultimately had advantages: it’s a C4 plant, while wheat is C3: maize yields can be much higher.

Teosinte is the ancestor of modern corn. Cochran’s point is that in the domestication game, wheat is a local optimum; given the wild ancestors of wheat and corn, you’d develop a better, more nutritious variety of wheat first and probably just abandon the corn. But if you didn’t have wheat and you just had corn, you’d keep at the corn–and in the end, get an even better plant.

(Of course, corn is a success story; plenty of people domesticated plants that actually weren’t very good just because that’s what they happened to have.)

Japan in 1850 was a culturally rich, pre-industrial, feudal society with a strong isolationist stance. In 1853, the Japanese discovered that the rest of the world’s industrial, military technology was now sufficiently advanced to pose a serious threat to Japanese sovereignty. Things immediately degenerated, culminating in the Boshin War (civil war, 1868-9,) but with the Meiji Restoration Japan embarked on an industrialization crash-course. By 1895, Japan had kicked China’s butt in the First Sino-Japanese War and the Japanese population doubled–after holding steady for centuries–between 1873 and 1935. (From 35 to 70 million people.) By the 1930s, Japan was one of the world’s most formidable industrial powers, and today it remains an economic and technological powerhouse.

Clearly the Japanese people, in 1850, contained the untapped ability to build a much more complex and advanced society than the one they had, and it did not take much exposure to the outside world to precipitate a total economic and technological revolution.

Sequoyah’s syllabary, showing script and print forms

A similar case occurred in 1821 when Sequoyah, a Cherokee man, invented his own syllabary (syllable-based alphabet) after observing American soldiers reading letters. The Cherokee quickly adopted Sequoyah’s writing system–by 1825, the majority of Cherokee were literate and the Cherokee had their own printing industry. Interestingly, although some of the Cherokee letters look like Latin, Greek, or Cyrillic letters, there is no correspondence in sound, because Sequoyah could not read English. He developed his entire syllabary after simply being exposed to the idea of writing.

The idea of literacy has occurred independently only a few times in human history; the vast majority of people picked up alphabets from someone else. Our alphabet comes from the Latins, who got it from the Greeks, who adopted it from the Phoenicians, who got it from some proto-Canaanite script writers, and even then literacy spread pretty slowly. The Cherokee, while not as technologically advanced as Europeans at the time, were already a nice agricultural society and clearly possessed the ability to become literate as soon as they were exposed to the idea.

When I walk around our cities, I often think about what their ruins will look like to explorers in a thousand years.
“We also pass a ruin of what once must have been a grand building. The walls are marked with logos from a Belgian University. This must have once been some scientific study centre of sorts.”

By contrast, there are many cases of people being exposed to or given a new technology but completely lacking the ability to functionally adopt, improve, or maintain it. The Democratic Republic of the Congo, for example, is full of ruined colonial-era buildings and roads built by outsiders that the locals haven’t maintained. Without the Belgians, the infrastructure has crumbled.

Likewise, contact between Europeans and groups like the Australian Aborigines did not result in the Aborigines adopting European technology, nor in a new and improved fusion of Aboriginal and European tech, but in total disaster for the Aborigines. While the Japanese consistently top the charts in educational attainment, Aboriginal communities are still struggling with low literacy rates, high dropout rates, and low employment–the modern industrial economy, in short, has not been kind to them.

Along a completely different evolutionary pathway, cephalopods–squids, octopuses, and their tentacled ilk–are the world’s smartest invertebrates. This is pretty amazing, given that their nearest cousins are snails and clams. Yet cephalopod intelligence only goes so far. No one knows (yet) just how smart cephalopods are–squids in particular are difficult to work with in captivity because they are active hunter/swimmers and need a lot more space than the average aquarium can devote–but their brain power appears to be on the order of a dog’s.

After millions of years of evolution, cephalopods may represent the best nature can do–with an invertebrate. Throw in a backbone, and an animal can get a whole lot smarter.

And in chemistry, activation energy is the amount of energy you have to put into a chemical system before a reaction can begin. Stable chemical systems essentially exist at local optima, and it can require the input of quite a lot of energy before you get any action out of them. For atoms, iron is the global–should we say universal?–optimum, beyond which reactions are endothermic rather than exothermic. In other words, nuclear fusion in stellar cores ends with iron; elements heavier than iron can only be produced when stars explode.
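For the chemically inclined, the Arrhenius relation k = A·exp(−Ea / RT) makes the barrier idea quantitative. The numbers below are illustrative assumptions, but they show how sharply the reaction rate falls as the activation-energy barrier grows, which is why deep troughs keep systems stuck.

```python
import math

R = 8.314   # gas constant, J / (mol K)
T = 298.0   # room temperature, K
A = 1e13    # assumed pre-exponential factor

def rate_constant(Ea_kJ_per_mol: float) -> float:
    # Arrhenius relation: k = A * exp(-Ea / (R * T))
    return A * math.exp(-Ea_kJ_per_mol * 1000 / (R * T))

for Ea in (50, 75, 100):   # kJ/mol
    print(f"Ea = {Ea:3d} kJ/mol  ->  k = {rate_constant(Ea):.3e}")
# Each extra 25 kJ/mol of barrier cuts the rate by a factor of roughly 24,000.
```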

So what do local optima have to do with diversity?

The current vogue for diversity (“Diversity is our greatest strength”) suggests that we can reach global optima faster by simply smushing everyone together and letting them compare notes. Scroll back to the Japanese case. Edo Japan had a nice culture, but it was also beset by frequent famines. Meiji Japan doubled its population. Giving everyone, right now, the same technology and culture would bring everyone up to the same level.

But you can’t tell from within if you are at a local or global optimum. That’s how they work. The Indians likely would have never developed corn had they been exposed to wheat early on, and subsequently Europeans would have never gotten to adopt corn, either. Good ideas can take a long time to refine and develop. Cultures can improve rapidly–even dramatically–by adopting each other’s good ideas, but they also need their own space and time to pursue their own paths, so that good but slowly developing ideas aren’t lost.

Which gets us back to Patchwork.


Do Sufficiently Large Organizations Start Acting Like Malevolent AIs? (pt 2)

(Part 1 is here)

As we were discussing on Monday, as our networks have become more effective, our ability to incorporate new information may have actually gone down. Ironically, as we add more people to a group–beyond a certain limit–it becomes more difficult for individuals with particular expertise to convince everyone else in the group that the group’s majority consensus is wrong.

The difficulties large groups experience trying to coordinate and share information force them to become dominated by procedures–set rules of behavior and operation are necessary for large groups to operate. A group of three people can use ad-hoc consensus and rock-paper-scissors to make decisions; a nation of 320 million requires a complex body of laws and regulations. (I once tried to figure out just how many laws and regulations America has. The answer I found was that no one knows.)

An organization is initially founded to accomplish some purpose that benefits its founders–generally to make them well-off, but often also to produce some useful good or service. A small organization is lean, efficient, and generally exemplifies the ideals put forth in Adam Smith’s invisible hand:

It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest. We address ourselves, not to their humanity but to their self-love, and never talk to them of our necessities but of their advantages. —The Wealth Of Nations, Book I

As an organization ages and grows, its founders retire or move on, it becomes more dependent on policies and regulations, and each individual employee finds his own incentives further displaced from the company’s original intentions. Soon a company is no longer devoted to either the well-being of its founders or its customers, but to the company itself. (And that’s kind of a best-case scenario in which the company doesn’t just disintegrate into individual self-interest.)

I am reminded of a story about a computer that had been programmed to play Tetris–actually, it had been programmed not to lose at Tetris. So the computer paused the game. A paused game cannot lose.
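A minimal sketch of that incentive structure (hypothetical, not the original Tetris experiment): if the only objective is to avoid losing, the action that never risks losing dominates everything else.

```python
ACTIONS = ["place_piece", "pause"]

def expected_loss_probability(action: str) -> float:
    # Illustrative numbers: placing pieces eventually risks topping out;
    # a paused game can never lose.
    return 0.01 if action == "place_piece" else 0.0

best = min(ACTIONS, key=expected_loss_probability)
print(best)   # -> "pause": the safest way to not lose is to never play
```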

What percentage of employees (especially management) have been incentivized to win? And what percentage are being incentivized to not lose?

And no, I don’t mean that in some 80s buzzword-esque way. Most employees have more to lose (ie, their jobs) if something goes wrong as a result of their actions than to gain if something goes right. The stockholders might hope that employees are doing everything they can to maximize profits, but really, most people are trying not to mess up and get fired.

Fear of messing up goes beyond the individual scale. Whole companies are goaded by concerns about risk–“Could we get sued?” Large corporations have entire legal teams devoted to telling them how they could get sued for whatever they’re doing and to filing lawsuits against their competitors for whatever they’re doing.

This fear of risk carries over, in turn, to government regulations. As John Sanphillipo writes in City Regulatory Hurdles Favor Big Developers, not the Little Guy:

A family in a town I visited bought an old fire station a few years ago with the intention of turning it into a Portuguese bakery and brewpub. They thought they’d have to retrofit the interior of the building to meet health and safety standards for such an establishment.

Turns out the cost of bringing the landscape around the outside of the building up to code was their primary impediment. Mandatory parking requirements, sidewalks, curb cuts, fire lanes, on-site stormwater management, handicapped accessibility, drought-tolerant native plantings…it’s a very long list that totaled $340,000 worth of work. … Guess what? They decided not to open the bakery or brewery. …

Individually it’s impossible to argue against each of the particulars. Do you really want to deprive people in wheelchairs of the basic civil right of public accommodation? Do you really want the place to catch fire and burn? Do you want a barren landscape that’s bereft of vegetation? …

I was in Hamtramck, Michigan a couple of years ago to participate in a seminar about reactivating neighborhoods through incremental small-scale development. …

While the event was underway the fire marshal happened to drive by and noticed there were people—a few dozen actual humans—occupying a commercial building in broad daylight. In a town that has seen decades of depopulation and disinvestment, this was an odd sight. And he was worried. Do people have permission for this kind of activity? Had there been an inspection? Was a permit issued? Is everything insured? He called one of his superiors to see if he should shut things down in the name of public safety.

It’s a good article. You should read the whole thing.

Back in Philippe Bourgois’s In Search of Respect: Selling Crack in El Barrio, Bourgois describes one drug dealer’s attempt to use the money he’d made to go into honest business by opening a convenience store. Unfortunately, he couldn’t get the store compliant with NYC disability-access regulations, and so the store never opened and the owner went back to dealing drugs. (What IQ, I wonder, is necessary to comply with all of these laws and regulations in the first place?)

Now, I’m definitely in favor of disabled people being able to buy groceries and use bathrooms. But what benefits a disabled person more: a convenience store that’s not fully wheel-chair accessible, or a crack house?

In My IRB Nightmare, Scott Alexander writes about trying to do a simple study to determine whether the screening test already being used to diagnose people with bipolar disorder is effective at diagnosing them:

When we got patients, I would give them the bipolar screening exam and record the results. Then Dr. W. would conduct a full clinical interview and formally assess them. We’d compare notes and see how often the screening test results matched Dr. W’s expert diagnosis.

Remember, they were already using the screening test on patients and then having them talk to the doctor for a formal assessment. The only thing the study added was that Scott would compare how well the screening results matched the formal assessment. No patients would be injected, subjected to new procedures, or even asked different questions. They just wanted to compare two data sets.

After absurd quantities of paperwork and an approval process much too long to summarize here, the project got audited:

I kept the audit report as a souvenir. I have it in front of me now. Here’s an example infraction:

The data and safety monitoring plan consists of ‘the Principal Investigator will randomly check data integrity’. This is a prospective study with a vulnerable group (mental illness, likely to have diminished capacity, likely to be low income) and, as such, would warrant a more rigorous monitoring plan than what is stated above. In addition to the above, a more adequate plan for this study would also include review of the protocol at regular intervals, on-going checking of any participant complaints or difficulties with the study, monitoring that the approved data variables are the only ones being collected, regular study team meetings to discuss progress and any deviations or unexpected problems. Team meetings help to assure participant protections, adherence to the protocol. Having an adequate monitoring plan is a federal requirement for the approval of a study. See Regulation 45 CFR 46.111 Criteria For IRB Approval Of Research. IRB Policy: PI Qualifications And Responsibility In Conducting Research. Please revise the protocol via a protocol revision request form. Recommend that periodic meetings with the research team occur and be documented.

… Faced with submitting twenty-seven new pieces of paperwork to correct our twenty-seven infractions, Dr. W and I gave up. We shredded the patient data and the Secret Code Log. We told all the newbies they could give up and go home. … We told the IRB that they had won, fair and square; we surrendered unconditionally.

The point of all that paperwork and supervision is to make sure that no one replicates the Tuskegee Syphilis Experiment or the Nazi anything. Noble sentiments–but as a result, a study comparing two data sets had to be canceled.

I’ve noticed recently that much of the interesting medical research is happening in the third world/China–places where the regulations aren’t as strong and experiments (of questionable ethics or not) can actually get done.

Like the computer taught not to lose at Tetris, all of these systems are more focused on minimizing risk–even non-existent risk–than on actually succeeding.

In his review of Yudkowsky’s Inadequate Equilibria, Scott writes:

…[Yudkowsky] continues to the case of infant parenteral nutrition. Some babies have malformed digestive systems and need to have nutrient fluid pumped directly into their veins. The nutrient fluid formula used in the US has the wrong kinds of lipids in it, and about a third of babies who get it die of brain or liver damage. We’ve known for decades that the nutrient fluid formula has the wrong kind of lipids. We know the right kind of lipids and they’re incredibly cheap and there is no reason at all that we couldn’t put them in the nutrient fluid formula. We’ve done a bunch of studies showing that when babies get the right nutrient fluid formula, the 33% death rate disappears. But the only FDA-approved nutrient fluid formula is the one with the wrong lipids, so we just keep giving it to babies, and they just keep dying. Grant that the FDA is terrible and ruins everything, but over several decades of knowing about this problem and watching the dead babies pile up, shouldn’t somebody have done something to make this system work better?

The doctors have to use the FDA-approved formula or they could get sued for malpractice. The insurance companies, of course, only cover the FDA-approved formula. The formula makers are already making money selling the current formula and would probably have to go through an expensive, multi-year review system (with experiments far more regulated than Scott’s) to get the new formula approved, and even then they might not actually get approval. In short, on one side are people in official positions of power whose lives could be made worse (or less convenient) if they tried to fix the problem, and on the other side are dead babies who can’t stand up for themselves.

The Chankiri Tree (Killing Tree) where infants were fatally smashed, Choeung Ek, Cambodia.

Communism strikes me as the ultimate expression of this beast: a society fully transformed into a malevolent AI. It’s impossible to determine exactly how many people were murdered by communism, but the Black Book of Communism estimates a death toll between 85 and 100 million people.

Capitalism, for all its faults, is at least somewhat decentralized. If you make a bad business decision, you suffer the consequences and can hopefully learn from your mistakes and make better decisions in the future. But in communist systems, one central planner’s bad decisions can cause suffering for millions of other people, resulting in mass death. Meanwhile, the central planner himself may be punished for admitting and correcting the bad decision. Centralized economies simply lack the feedback loops necessary to fix problems before they start killing people.

While FDA oversight of medicines is probably important, would it be such a bad thing if a slightly freer market in parenteral nutrition allowed parents to choose between competing brands of formula, each promising not to kill their babies?

Of course, capitalism isn’t perfect, either. SpottedToad recently had an interesting post, 2010s Identity Politics as Hostile AI:

There’s an interesting post mortem on the rise and fall of the clickbait liberalism site Mic.com, that attracted an alleged 65 million unique visitors on the strength of Woketastic personal stories like “5 Powerful Reasons I’m a (Male) Feminist,” …

Every time Mic had a hit, it would distill that success into a formula and then replicate it until it was dead. Successful “frameworks,” or headlines, that went through this process included “Science Proves TK,” “In One Perfect Tweet TK,” “TK Reveals the One Brutal Truth About TK,” and “TK Celebrity Just Said TK Thing About TK Issue. Here’s why that’s important.” At one point, according to an early staffer who has since left, news writers had to follow a formula with bolded sections, which ensured their stories didn’t leave readers with any questions: The intro. The problem. The context. The takeaway.

…But the success of Mic.com was due to algorithms built on top of algorithms. Facebook targets which links are visible to users based on complex and opaque rules, so it wasn’t just the character of the 2010s American population that was receptive to Mic.com’s specific brand of SJW outrage clickbait, but Facebook’s rules for which articles to share with which users and when. These rules, in turn, are calibrated to keep users engaged in Facebook as much as possible and provide the largest and most receptive audience for its advertisers, as befits a modern tech giant in a two-sided market.

Professor Bruce Charlton has a post about Head Girl Syndrome–the Opposite of Creative Genius that is good and short enough that I wish I could quote the whole thing. A piece must suffice:

The ideal Head Girl is an all-rounder: performs extremely well in all school subjects and has a very high Grade Point Average. She is excellent at sports, Captaining all the major teams. She is also pretty, popular, sociable and well-behaved.

The Head Girl will probably be a big success in life, in whatever terms being a big success happens to be framed …

But the Head Girl is not, cannot be, a creative genius. …

The more selective the social system, the more it will tend to privilege the Head Girl and eliminate the creative genius.

Committees, peer review processes, voting – anything which requires interpersonal agreement and consensus – will favour the Head Girl and exclude the creative genius.  …


We live in a Head Girl’s world – which is also a world where creative genius is marginalized and disempowered to the point of near-complete invisibility.

The quest for social status is, I suspect, one of the things driving the system. Status-oriented people refuse to accept information that comes from people of lower status than themselves, which renders system feedback even more difficult. The internet as a medium of information sharing is beautiful; the internet as a medium of status signaling is horrible.

So what do you think? Do sufficiently large organizations start acting like malevolent (or hostile) AIs?

(Back to Part 1)


Do Sufficiently Large Organizations Start Acting Like Malevolent AIs? (pt 1)

(and Society is an Extremely Large Organization)

What do I mean by malevolent AI?

AI typically refers to any kind of intelligence or ability to learn possessed by machines. Malevolent AI occurs when a machine pursues its programmed objectives in a way that humans find horrifying or immoral. For example, a machine programmed to make paperclips might decide that the easiest way to maximize paperclip production is to enslave humans to make paperclips for it. Superintelligent AI is AI that has figured out how to make itself smarter and thus keeps getting smarter and smarter. (Should we develop malevolent superintelligent AI, then we’ll really have something to worry about.)

Note: people who actually study AI probably have better definitions than I do.
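In that same loose spirit, here is a cartoon of the paperclip problem in code (entirely my own toy, with invented resource names and numbers): the objective counts paperclips and nothing else, so everything else, including things humans care about, gets fed into paperclip production.

# Toy sketch: an objective function that counts only paperclips.
resources = {"steel": 100, "farmland": 50, "human_labor": 200}
paperclips_per_unit = {"steel": 10, "farmland": 1, "human_labor": 5}

def maximize_paperclips(stock):
    total = 0
    for name, amount in stock.items():
        total += amount * paperclips_per_unit[name]
        stock[name] = 0  # nothing in the objective says "leave the farmland alone"
    return total

print(maximize_paperclips(resources))  # 2050 paperclips; no farms or labor left for anything else

The point isn’t the code, it’s the objective: whatever you leave out of it, the optimizer treats as free raw material.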

While we like to think of ourselves (humans) as unique, thinking individuals, it’s clear that many of our ideas come from other people. Chances are good you didn’t think up washing your hands or brushing your teeth by yourself, but learned about them from your parents. Society puts quite a bit of effort, collectively speaking, into teaching children all of the things people have learned over the centuries–from heliocentrism to the fact that bleeding patients generally makes diseases worse, not better.

Just as we cannot understand the behavior of ants or bees simply by examining the anatomy of a single ant or single bee, but must look at the collective life of the entire colony/hive, so we cannot understand human behavior by merely examining a single human, but must look at the collective nature of human societies. “Man is a political animal,” by which Aristotle did not mean that we are inherently inclined to fight over transgender bathrooms, but that we are instinctively social:

Hence it is evident that the state is a creation of nature, and that man is by nature a political animal. And he who by nature and not by mere accident is without a state, is either above humanity, or below it; he is the ‘Tribeless, lawless, hearthless one,’ whom Homer denounces—the outcast who is a lover of war; he may be compared to a bird which flies alone.

Now the reason why man is more of a political animal than bees or any other gregarious animals is evident. Nature, as we often say, makes nothing in vain, and man is the only animal whom she has endowed with the gift of speech. And whereas mere sound is but an indication of pleasure or pain, and is therefore found in other animals (for their nature attains to the perception of pleasure and pain and the intimation of them to one another, and no further), the power of speech is intended to set forth the expedient and inexpedient, and likewise the just and the unjust. And it is a characteristic of man that he alone has any sense of good and evil, of just and unjust, and the association of living beings who have this sense makes a family and a state. –Aristotle, Politics

With very rare exceptions, humans–all humans, in all parts of the world–live in groups. Tribes. Families. Cities. Nations. Our nearest primate relatives, chimps and bonobos, also live in groups. Primates are social, and their behavior can only be understood in the context of their groups.

Groups of humans are able to operate in ways that individual humans cannot, drawing on the collective memories, skills, and knowledge of their members to create effects much greater than what could be achieved by each person acting alone. For example, one lone hunter might be able to kill a deer–or if he is extremely skilled, hardworking, and lucky, a dozen deer–but ten hunters working together can drive an entire herd of deer over a cliff, killing hundreds or even thousands. (You may balk at the idea, but many traditional hunting societies were dependent on only a few major hunts of migrating animals to provide the majority of their food for the entire year–meaning that those few hunts had to involve very high numbers of kills or else the entire tribe would starve while waiting for the animals to return.)

Chimps have never, to my knowledge, driven megafauna to extinction–but humans have a habit of doing so wherever they go. Humans are great at what we do, even if we aren’t always great at extrapolating long-term trends.

But the beneficial effects of human cooperation don’t necessarily continue to increase as groups grow larger–China’s 1.3 billion people don’t appear to have better lives than Iceland’s 332,000 people. Indeed, there probably is some optimal size–depending on activity and available communications technology–beyond which the group struggles to coordinate effectively and begins to degenerate.

CBS advises us to make groups of 7:

As it turns out, seven is a great number for not only forming an effective fictional fighting force, but also for task groups that use spreadsheets instead of swords to do their work.

That’s according to the new book Decide & Deliver: 5 Steps to Breakthrough Performance in Your Organization (Harvard Business Press).

Once you’ve got 7 people in a group, each additional member reduces decision effectiveness by 10%, say the authors, Marcia W. Blenko, Michael C. Mankins, and Paul Rogers.

Unsurprisingly, a group of 17 or more rarely makes a decision other than when to take a lunch break.
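Taking the authors’ 10% figure at face value (and assuming, on my part, that the penalty compounds multiplicatively per extra member), the decay is easy to chart:

def decision_effectiveness(group_size, penalty=0.10):
    # Effectiveness relative to a seven-person group, per the claim quoted above.
    extra_members = max(0, group_size - 7)
    return (1 - penalty) ** extra_members

for n in (7, 10, 17, 25):
    print(n, round(decision_effectiveness(n), 2))
# 7 -> 1.0, 10 -> 0.73, 17 -> 0.35, 25 -> 0.15

By seventeen members the group is down to roughly a third of its baseline effectiveness, which fits the lunch-break quip.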

Princeton blog reports:

The trope that the likelihood of an accurate group decision increases with the abundance of brains involved might not hold up when a collective faces a variety of factors — as often happens in life and nature. Instead, Princeton University researchers report that smaller groups actually tend to make more accurate decisions, while larger assemblies may become excessively focused on only certain pieces of information. …

collective decision-making has rarely been tested under complex, “realistic” circumstances where information comes from multiple sources, the Princeton researchers report in the journal Proceedings of the Royal Society B. In these scenarios, crowd wisdom peaks early then becomes less accurate as more individuals become involved, explained senior author Iain Couzin, a professor of ecology and evolutionary biology. …

The researchers found that the communal ability to pool both pieces of information into a correct, or accurate, decision was highest in a band of five to 20. After that, the accurate decision increasingly eluded the expanding group.

Couzin found that in small groups, people with specialized knowledge could effectively communicate that to the rest of the group, whereas in larger groups, they simply couldn’t convey their knowledge to enough people and group decision-making became dominated by the things everyone knew.
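Here’s a toy simulation of that dynamic (my own construction, not the researchers’ model): a handful of specialists hold independent, high-quality information, while everyone else follows a single shared, mediocre cue. Majority vote works well while the specialists can still swing it, then collapses to the quality of the thing everyone knows.

import random

def group_accuracy(group_size, n_specialists=5, trials=20000):
    correct = 0
    for _ in range(trials):
        shared_cue_right = random.random() < 0.55  # the mediocre cue everyone can see
        votes = 0
        for _ in range(min(n_specialists, group_size)):
            votes += 1 if random.random() < 0.90 else -1  # independent, well-informed members
        followers = max(0, group_size - n_specialists)
        votes += followers if shared_cue_right else -followers
        correct += votes > 0
    return correct / trials

for n in (7, 15, 45, 155):
    print(n, round(group_accuracy(n), 3))
# typical output: 7 -> ~0.96; 15 and up -> ~0.55, the accuracy of the shared cue

The numbers are made up, but the mechanism is the one described above: once the crowd of cue-followers outnumbers the specialists, the specialists’ knowledge simply can’t move the majority.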

If you could travel back in time and propose the idea of democracy to the inhabitants of 13th century England, they’d respond with incredulity: how could peasants in far-flung corners of the kingdom find out who was running for office? Who would count the votes? How many months would it take to tally up the results, determine who won, and get the news back to the outlying provinces? If you have a printing press, news–and speeches–can quickly and accurately spread across large distances and to large numbers of people, but prior to the press, large-scale democracy simply wasn’t practical.

Likewise, the communism of 1917 probably couldn’t have been enacted in 1776, simply because society at that time didn’t have the technology yet to gather all of the necessary data on crop production, factory output, etc. (As it was, neither did Russia of 1917, but they were closer.)

Today, the amount of information we can gather and share on a daily basis is astounding. I have at my fingertips the world’s greatest collection of human knowledge, an overwhelming torrent of data.

All of these information networks have linked society together into an increasingly efficient meta-brain–unfortunately, it’s not a very smart meta-brain. Like the participants in Couzin’s experiments, we are limited to what “everyone knows,” stymied in our efforts to impart more specialized knowledge. (I don’t know about you, but I find being shouted down by a legion of angry people who know less about a subject than I do one of the particularly annoying features of the internet.)

For example, there’s been a lot of debate lately about immigration, but how much do any of us really know about immigrants or immigrant communities? How much of this debate is informed by actual knowledge of the people involved, and how much is just people trying to extend vague moral principles to cover novel situations? I recently had a conversation with a progressive acquaintance who justified mass-immigration on the grounds that she has friendly conversations with the cabbies in her city. Heavens protect us–I hope to get along with people as friends and neighbors, not just when I am paying them!

One gets the impression in conversation with Progressives that they regard Christian Conservatives as a real threat, because that is a group that can throw its weight around in elections or generally enforce cultural norms that liberals don’t like, yet they are completely oblivious to the immigrants’ beliefs. Most of our immigrants hail from countries that are rather more conservative than the US and definitely more conservative than our liberals.

Any sufficiently intelligent democracy ought to be able to think critically about the political opinions of the new voters it is awarding citizenship to, but we struggle with this. My Progressive acquaintance seems to think that we can import an immense, conservative, third-world underclass and it will stay servile indefinitely, never voting its own interests or having any effect on social norms. (Or its interests will be, coincidentally, hers.)

This is largely an information problem–most Americans are familiar with our particular brand of Christian conservatives, but are unfamiliar with Mexican or Islamic ones.

How many Americans have intimate, detailed knowledge of any Islamic society? Very few of us who are not Muslim ourselves speak Arabic, and few Muslim countries are major tourist destinations. Aside from the immigrants themselves, soldiers, oil company employees, and a handful of others have spent time in Islamic countries, but that’s about it–and no one is making any particular effort to listen to their opinions. (It’s a bit sobering to realize that I know more about Islamic culture than 90% of Americans and I still don’t really know anything.)

So instead of making immigration policy based on actual knowledge of the groups involved, people try to extend the moral rules–heuristics–they already have. So people who believe that “religious tolerance is good,” because this rule has generally been useful in preventing conflict between American religious groups, think this rule should include Muslim immigrants. People who believe, “I like being around Christians,” also want to apply their rule. (And some people believe, “Groups are more oppressive when they’re the majority, so I want to re-structure society so we don’t have a majority,” and use that rule to welcome new immigrants.)

And we are really bad at testing whether or not our rules are continuing to be useful in these new situations.


Ironically, as our networks have become more effective, our ability to incorporate new information may have actually gone down.

The difficulties large groups experience trying to coordinate and share information force them to become dominated by procedures–set rules of behavior and operation are necessary for large groups to operate. A group of three people can use ad-hoc consensus and rock-paper-scissors to make decisions; a nation of 320 million requires a complex body of laws and regulations.

But it’s getting late, so let’s continue this discussion in the next post.


Parsis, Travellers, and Human Niches

Irish Travellers, 1954


Why are there many kinds of plants and animals? Why doesn’t the best out-compete, eat, and destroy the others, rising to be the sole dominant species on Earth?

In ecology, a niche is an organism’s specific place within the environment. Some animals eat plants; some eat dung. Some live in the sea; others in trees. Different plants flower and grow in different seasons; some are pollinated by bees and some by flies. Every species has its specific niche.

The Competitive Exclusion Principle (aka Gause’s Law) states that ‘no two species can occupy the same niche’ (or positively, ‘two species coexisting must have different niches.’) For example, if squirrels and chipmunks both want to nest in the treetops and eat nuts, (and there are limited treetops and nuts,) then over time, whichever species is better at finding nuts and controlling the treetops will dominate the niche and the other, less successful species will have to find a new niche.

If squirrels are dominating the treetops and nuts, this leaves plenty of room for rabbits to eat grass and owls to eat squirrels.
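The standard textbook way to formalize this is the Lotka-Volterra competition model (the model is standard ecology; the particular numbers and code below are my own sketch, not anything from the original sources): each species grows logistically, discounted by how heavily the other species draws on its resources. High niche overlap means one species drives the other out; low overlap lets both persist.

# Lotka-Volterra competition, integrated with simple Euler steps.
# r = growth rate, K = carrying capacity, a12/a21 = how strongly each
# species is harmed by the other (i.e., how much their niches overlap).
def compete(a12, a21, r=0.5, K=100.0, dt=0.05, steps=20000):
    n1, n2 = 10.0, 10.0
    for _ in range(steps):
        dn1 = r * n1 * (1 - (n1 + a12 * n2) / K)
        dn2 = r * n2 * (1 - (n2 + a21 * n1) / K)
        n1 += dn1 * dt
        n2 += dn2 * dt
    return round(n1, 1), round(n2, 1)

print(compete(a12=1.5, a21=0.7))  # same niche: species 1 collapses, species 2 takes the treetops
print(compete(a12=0.4, a21=0.4))  # different niches: both persist at reduced numbers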

II. So I was reading recently about the Parsis and the Travellers. The Parsis, as we discussed on Monday, are Zoroastrians, originally from Persia (modern-day Iran,) who settled in India about a thousand years ago. They’re often referred to as the “Jews of India” because they played a similar role in Indian society to that historically played by Jews in Europe.*

*Yes I know there are actual Jews in India.

The Travellers are an Irish group that’s functionally similar to Gypsies, but in fact genetically Irish:

In 2011 an analysis of DNA from 40 Travellers was undertaken at the Royal College of Surgeons in Dublin and the University of Edinburgh. The study provided evidence that Irish Travellers are a distinct Irish ethnic minority, who separated from the settled Irish community at least 1000 years ago; the claim was made that they are as distinct from the settled community as Icelanders are from Norwegians.[36]

It appears that Ireland did not have enough Gypsies of Indian extraction and so had to invent its own.

And though I originally thought that only in jest, why not? Gypsies occupy a particular niche, and if there are Gypsies around, I doubt anyone else is going to out-compete them for that niche. But if there aren’t any, then surely someone else could.

According to Wikipedia, the Travellers traditionally were tinkers, mending tinware (like pots) and acquiring dead/old horses for slaughter.

The Gypsies appear to have been originally itinerant musicians/entertainers, but have also worked as tinkers, smiths, peddlers, miners, and horse traders (today, car salesmen.)

These are not glorious jobs, but they are jobs, and peripatetic people have done them.

Jews (and Parsis, presumably) also filled a social niche, using their network of family/religious ties to other Jews throughout the diaspora as the basis of a high-trust business/trading network at a time when trade was difficult and routes were dangerous.

On the subject of “Magdeburg rights” or law in Eastern Europe, Wikipedia notes:

In medieval Poland, Jews were invited along with German merchants to settle in cities as part of the royal city development policy.

Jews and Germans were sometimes competitors in those cities. Jews lived under privileges that they carefully negotiated with the king or emperor. They were not subject to city jurisdiction. These privileges guaranteed that they could maintain communal autonomy, live according to their laws, and be subjected directly to the royal jurisdiction in matters concerning Jews and Christians. One of the provisions granted to Jews was that a Jew could not be made Gewährsmann, that is, he could not be compelled to tell from whom he acquired any object which had been sold or pledged to him and which was found in his possession. Other provisions frequently mentioned were a permission to sell meat to Christians, or employ Christian servants.

External merchants coming into the city were not allowed to trade on their own, but instead forced to sell the goods they had brought into the city to local traders, if any wished to buy them.

Note that this situation is immensely better if you already know the guy you’re selling to inside the city and he’s not inclined to cheat you because you both come from the same small, tight-knit group.


Under Bolesław III (1102–1139), the Jews, encouraged by the tolerant regime of this ruler, settled throughout Poland, including over the border in Lithuanian territory as far as Kiev.[32] Bolesław III recognized the utility of Jews in the development of the commercial interests of his country. … Mieszko III employed Jews in his mint as engravers and technical supervisors, and the coins minted during that period even bear Hebraic markings.[30] … Jews enjoyed undisturbed peace and prosperity in the many principalities into which the country was then divided; they formed the middle class in a country where the general population consisted of landlords (developing into szlachta, the unique Polish nobility) and peasants, and they were instrumental in promoting the commercial interests of the land.

If you need merchants and goldsmiths, someone will become merchants and goldsmiths. If it’s useful for those merchants and goldsmiths to all be part of one small, close-knit group, then a small, close-knit group is likely to step into that niche and out-compete anyone else trying to occupy it.

The similarity of the Parsis to the Jews probably has less to do with them both being monotheists (after all, Christians, Muslims, and Sikhs are also monotheists,) and more to do with them both being small but widely-flung diasporic communities united by a common religion that allows them to use their group as a source of trustworthy business partners.

Over hundreds or thousands of years, humans might not just move into social niches, but actually become adapted to them–Jews and Parsis are both reported to be very smart, for example.

III. I can think of several other cases of ethnic groups moving into a particular niche. In the US, the gambling and bootleg alcohol trade were long dominated by ethnic Sicilians, while the crack and heroin trades have been dominated by black and Hispanic gangs.

Note that, while these activities are (often) illegal, they are still things that people want to do and the mafia/gangs are basically providing goods/services to their customers. As they see it, they’re just businessmen. They’re out to make money, not commit random violence.

That said, these guys do commit lots of violence, including murder, blackmail and extortion. Even violent crime can be its own niche, if it pays well enough.

(Ironically, police crackdown on ethnic Sicilian control in NYC coincided with a massive increase in crime–did the mafia, by controlling a particular territory, keep out competing bands of criminals?)

On a more pleasant note, society is now rich enough that many people can make a living as professional sports stars, marry other professional sports stars, and have children who go on to also be professional sports stars. It’s not quite at the level of “a caste of professional athletes genetically optimized for particular sports,” but if this kept up for a few hundred years, it could be.

Similarly, over in Nepal, “Sherpa” isn’t just a job, it’s an ethnic group. Sherpas, due to their high elevation adaptation, have an advantage over the rest of us when it comes to scaling Mt. Everest, and I hear the global mountain-climbing industry pays them well for their services. A Sherpa who can successfully scale Mt. Everest many times, make lots of money, and raise lots of children in an otherwise impoverished nation is thus a successful Sherpa–and contributing to the group’s further genetic and cultural specialization in the “climbing Mt. Everest” niche.

India, of course, is the ultimate case of ethnic groups specializing into specific jobs–it’d be interesting to see what adaptations different groups have acquired over the years.

I also wonder if the caste system is an effective way to minimize competition between groups in a multi-ethnic society, or if it leads to more resentment and instability in the long run.


Thermodynamics and Urban Sprawl

Termite Mound

Evolution is just a special case of thermodynamics. Molecules spontaneously arrange themselves to optimally dissipate energy.

Society itself is a thermodynamic system for entropy dissipation. Energy goes in–in the form of food and, recently, fuels like oil–and children and buildings come out.

Government is simply the entire power structure of a region–from the President to your dad, from bandits to your boss. But when people say, “government,” they typically mean the official one written down in laws that lives in white buildings in Washington, DC.


When the “government” makes laws that try to change the natural flow of energy or information through society, society responds by routing around the law, just as water flows around a boulder that falls in a stream.

The ban on trade with Britain and France in the early 1800s, for example, did not actually stop people from trading with Britain and France–trade just became re-routed through smuggling operations. It took a great deal of energy–in the form of navies–to suppress piracy and smuggling in the Gulf and Caribbean–chiefly by executing pirates and imprisoning smugglers.


When the government decided, in Griggs vs. Duke Power, that companies couldn’t use IQ tests in hiring anymore (because IQ tests have a “disparate impact” on minorities, since black people tend to score worse, on average, than whites), companies didn’t start hiring more black folks. They just started using college degrees as a proxy for intelligence, contributing to the soul-crushing debt and degree inflation young people know and love today.

Similarly, when the government tried to stop companies from asking about applicants’ criminal histories–again, because the results were disproportionately bad for minorities–companies didn’t start hiring more blacks. Since not hiring criminals is important to companies, HR departments turned to the next best metric: race. These laws ironically led to fewer blacks being hired, not more.

Where the government has tried to protect the poor by passing tenants’ rights laws, we actually see the opposite: poorer tenants are harmed. By making it harder to evict tenants, the government makes landlords reluctant to take on high-risk (ie, poor) tenants.

The passage of various anti-discrimination and subsidized housing laws (as well as the repeal of various discriminatory laws throughout the mid-20th century) led to the growth of urban ghettos, which in turn triggered the crime wave of the 70s, 80s, and 90s.

Crime and urban decay have made inner cities–some of the most valuable real estate in the country–nigh unlivable, resulting in the “flight” of millions of residents and the collective loss of millions of dollars due to plummeting home values.

Work-arounds are not cheap. They are less efficient–and thus more expensive–than the previous, banned system.

Urban sprawl driven by white flight

Smuggled goods cost more than legally traded goods due to the personal risks smugglers must take. If companies can’t tell who is and isn’t a criminal, the cost of avoiding criminals becomes turning down good employees just because they happen to be black. If companies can’t directly test intelligence, the cost becomes a massive increase in the amount of money being spent on accreditation and devaluation of the signaling power of a degree.

We have dug up literally billions of dollars’ worth of concentrated sunlight in the form of fossil fuels to rebuild our nation’s infrastructure around the criminal blight in the centers of our cities, condemning workers to hour-long commutes and to paying inflated prices for homes in neighborhoods with “good schools.”

Note: this is not an argument against laws. Some laws increase efficiency. Some laws make life better.

This is a reminder that everything is subject to thermodynamics. Nothing is free.


Piracy and Emergent Order: Peter Leeson’s An-arrgh-chy and the Invisible Hook

Buccaneer of the Caribbean, from Howard Pyle’s Book of Pirates

After our long trek through Siberia, I wanted to change things up and do something rather different for Anthropology Friday, so today we’re reading Peter Leeson’s work on pirates. Strictly speaking, it isn’t quite “anthropology” because Leeson didn’t go live with pirates, but I’m willing to overlook that.

The Golden Age of piracy only lasted from 1690 through 1730, but in those days they were a serious menace to ships and men alike on the high seas. In A General History of the Robberies and Murders of the most notorious Pyrates, (1724,) Captain Charles Johnson complained:

“This was at a Time that the Pyrates had obtained such an Acquisition of Strength, that they were in no Concern about preserving themselves from the Justice of Laws”

Pirates stalked the ocean’s major trade routes, particularly between the Bahamas, Caribbean islands, Madagascar, and the North American coast. Over a century after Captain Johnson, Melville recounted the pirates of Malaysia and Indonesia:

The long and narrow peninsula of Malacca, extending south-eastward from the territories of Birmah, forms the most southerly point of all Asia. In a continuous line from that peninsula stretch the long islands of Sumatra, Java, Bally, and Timor … By the straits of Sunda, chiefly, vessels bound to China from the west, emerge into the China seas.

Those narrow straits of Sunda divide Sumatra from Java; and standing midway in that vast rampart of islands, buttressed by that bold green promontory, known to seamen as Java Head; they not a little correspond to the central gateway opening into some vast walled empire: and considering the inexhaustible wealth of spices, and silks, and jewels, and gold, and ivory, with which the thousand islands of that oriental sea are enriched, it seems a significant provision of nature, that such treasures, by the very formation of the land, should at least bear the appearance, however ineffectual, of being guarded from the all-grasping western world. ..

Time out of mind the piratical proas of the Malays, lurking among the low shaded coves and islets of Sumatra, have sallied out upon the vessels sailing through the straits, fiercely demanding tribute at the point of their spears. Though by the repeated bloody chastisements they have received at the hands of European cruisers, the audacity of these corsairs has of late been somewhat repressed; yet, even at the present day, we occasionally hear of English and American vessels, which, in those waters, have been remorselessly boarded and pillaged. …

And who could tell whether, in that congregated caravan, Moby Dick himself might not temporarily be swimming, like the worshipped white-elephant in the coronation procession of the Siamese! So with stun-sail piled on stun-sail, we sailed along, driving these leviathans before us; when, of a sudden, the voice of Tashtego was heard, loudly directing attention to something in our wake. …

It seemed formed of detached white vapours, rising and falling something like the spouts of the whales; only they did not so completely come and go; for they constantly hovered, without finally disappearing. Levelling his glass at this sight, Ahab quickly revolved in his pivot-hole, crying, “Aloft there, and rig whips and buckets to wet the sails;—Malays, sir, and after us!”

Leeson distinguishes between different sorts of pirates; for the rest of this article we will not be dealing with Malay, Somali, or Barbary pirates, but only the Atlantic-dwelling species. These pirates enlisted for the long haul and lived for months at sea, forming veritable floating societies. Modern Somali pirates, by contrast, live ashore, hop in their boats when they spot a victim, rob and murder, then head back to shore–they form no comparable sea-borne society.

One of the most fascinating aspects of pirate life–leaving aside faulty romantic notions of plunder and murder–is that even these anarchists of the sea instituted social organization among themselves.

Marooned, by Howard Pyle

Pirates had contracts, complete with clauses detailing the division of loot, compensation for different injuries sustained on the job, division of power between the Captain and the Quarter-Master, and election of the captain.

Yes, pirates elected their captains, and if they did not like their captain’s performance, they could un-elect him. According to Leeson:

The historical record contains numerous examples of pirate crews deposing unwanted captains by majority vote or otherwise removing them from power through popular consensus. Captain Charles Vane’s pirate crew, for example, popularly deposed him for cowardice: “the Captain’s Behavior was obliged to stand the Test of a Vote, and a Resolution passed against his Honour and Dignity . . . deposing him from the Command”

In The Invisible Hook: The Law and Economics of Pirate Tolerance, Leeson provides us with a typical contract, used by pirate captain Edward Low’s crew around 1723:

1. The Captain is to have two full Shares; the Master is to have one Share and one half; The Doctor, Mate, Gunner[,] and Boatswain, one Share and one Quarter [and everyone else to have one share]. …
3. He that shall be found Guilty of Cowardice in the time of Ingagement, shall suffer what Punishment the Captain and Majority of the Company shall think fit.
4. If any Gold, Jewels, Silver, &c. be found on Board of any Prize or Prizes to the value of a Piece of Eight, & the finder do not deliver it to the Quarter Master in the space of 24 hours shall suffer what punishment the Captain and Majority of the Company shall think fit. …
6. He that shall have the Misfortune to lose a Limb in time of Engagement, shall have the Sum of Six hundred pieces of Eight, and remain aboard as long as he shall think fit. …
8. He that sees a sail first, shall have the best Pistol or Small Arm aboard of her.
9. He that shall be guilty of Drunkenness in time of Engagement shall suffer what Punishment the Captain and Majority of the Company shall think fit. …

Why did pirates go to the bother of writing contracts–or should we say, constitutions–for the running of their ships? In An-arrgh-chy: The Law and Economics of Pirate Organization, Leeson compares conditions aboard pirate ships to those aboard regular merchant vessels of the same day.

Merchant vessels were typically owned by corporations, such as the Dutch East India Company. Wealthy land-lubbers bought shares in these companies, which entitled them to a share of the boat’s profits when it returned to port. But these land-lubbers had no intention of actually getting on the boats–not only did they lack the requisite nautical knowledge, but ocean voyages were extremely dangerous. For example, 252 out of 270 sailors in Ferdinand Magellan’s crew died during their circumnavigation of the globe (1519 through 1522.) Imagine signing up for a job with a 93% death rate!

The owners, therefore, hired a captain, whose job–like a modern CEO–was to ensure that the ship returned with as high profits for its owners as possible.

The captain of a merchant ship was an autocrat with absolute control, including the power to dole out corporal punishment to his crew.

Ships through the ages: Pirate dhow; Spanish or Venetian galley; Spanish galleon
The dhow “is a typical 16th century dhow, a grab-built, lateen-rigged vessel of Arabia, the Mediterranean, and the Indian Ocean. It has the usual long overhang forward, high poop deck and open waist. The dhow was notorious in the slave trade on the east coast of Africa, and even after a thousand years is still one of the swiftest of sailing crafts.”

For all their pains, sailors were paid pitifully little: “Between 1689 and 1740 [pay] varied from 25 to 55 shillings per month, a meager £15 to £33 per year.” By contrast, “Even the small pirate crew captained by John Evans in 1722 took enough booty to split “nine thousand Pounds among thirty Persons”—or £300 a pirate—in less than six months “on the account”.”
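The arithmetic behind those figures is simple enough to check (20 shillings to the pound; the wage and booty numbers are the ones quoted above):

SHILLINGS_PER_POUND = 20

low_monthly, high_monthly = 25, 55              # merchant sailor's wage, shillings per month
print(low_monthly * 12 / SHILLINGS_PER_POUND)   # 15.0 pounds per year
print(high_monthly * 12 / SHILLINGS_PER_POUND)  # 33.0 pounds per year

booty, crew = 9000, 30                          # John Evans's crew, per Leeson
print(booty / crew)                             # 300.0 pounds per pirate, in under six months

In other words, a pirate on a lucky cruise could clear roughly ten to twenty years of an honest sailor’s wages in half a year, which goes a long way toward explaining the profession’s appeal despite the gallows.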

The captain’s absolute power over his crew was not due to offering good wages, pleasant working conditions, or even a decent chance of not dying, but because he had the power of the state behind him to enforce his authority and punish anyone who mutinied against him.

Pirate captains, by contrast, were neither responsible to stockholders nor had the power of the state to enforce their authority. They had only–literally–the consent of their governed: the other pirates on board.

Why have a captain at all?

A small group–a maximum of 10 or 15 people, perhaps–can easily discuss and negotiate everything they want to do. For a larger group to achieve its aims requires some form of coherent, established organization. It would be inefficient–and probably deadly–for multiple pirates to start shouting conflicting orders in the middle of battle. It would be inefficient–and probably deadly–for a pirate crew to argue over the proper division of loot after it was captured.

The average pirate crew–calculated by Leeson–had 80 people, well within Dunbar’s Number, the theoretical “cognitive limit to the number of people with whom one can maintain stable social relationships—relationships in which an individual knows who each person is and how each person relates to every other person.[1][2]” The Dunbar Number is generally believed to be around 100–150.

But how does emergent order emerge? What incentivizes each pirate to put aside their own personal desire to be captain and vote for someone else?

In Is Deference the Price of Being Seen as Reasonable? How Status Hierarchies Incentivize Acceptance of Low Status, Ridgeway and Nakagawa write (h/t Evolving_Moloch):

How, then, do collective, roughly consensual status hierarchies so regularly emerge among goal-interdependent people? While individuals have an enlightened self-interest in deferring to others on the basis of their apparent ability and willingness to contribute to the task effort, these same individuals also have a much more egoistic self-interest in gaining as much status and influence as they can, regardless. … The key is recognizing that whatever individuals want for themselves, they want others in the group to defer to those expected to best contribute to the collective effort since this will maximize task success and the shared benefits that flow from that. … As a result, group members are likely to form implicit coalitions to pressure others in the group to defer on the basis of performance expectations. … they are likely to be faced by an implicit coalition of other group members who pressure them to defer on that basis. … an interdependence of exchange interests gives rise to group norms that members enforce. … These are the core implicit rules for status that are likely taken-for-granted cultural knowledge…

The baseline respect earned by deference is less than the esteem offered to high-status member. It is respect for knowing one’s place because it views the deferrer as at least understanding what is validly better for achieving the groups goals even if he or she is not personally better. Yet it is still a type of worthiness. It is an acceptance of the low-status member not as an object of scorn but as a worthy member who understands and affirms the groups standards of value…

As such, [the reaction of respect and approval] acts as a positive incentive system for expected deference…

our implicit cultural rules for enacting status hierarchies not only incentivize contributions to the collective goal. they create a general, if modest, incentive to defer to those for whom the group has higher performance expectations–an incentive we characterize as the dignity of being deemed reasonable.

While any group above 10 or 15 people will have some communication complications, so long as it is still below the Dunbar Number, it should be able to work out its own, beneficial organization: order is a spontaneous, natural feature of human communities. Without this ability, pirate ships would not be able to function–they would devolve into back-stabbing anarchy. As Leeson notes:

The evidence also suggests that piratical articles were successful in preventing internal conflict and creating order aboard pirate ships. Pirates, it appears, strictly adhered to their articles. According to one historian, pirates were more orderly, peaceful, and well organized among themselves than many of the colonies, merchant ships, or vessels of the Royal Navy (Pringle 1953; Rogozinski 2000). As an astonished pirate observer put it, “At sea, they perform their duties with a great deal of order, better even than on the Ships of the Dutch East India Company; the pirates take a great deal of pride in doing things right”…

“great robbers as they are to all besides, [pirates] are precisely just among themselves; without which they could no more Subsist than a Structure without a Foundation” …

Beyond the Dunbar Number, however, people must deal with strangers–people who are not part of their personal status-conferring coalition. Large societies require some form of top-down management in order to function.

Based on the legend of Henri Caesar–see also the story of Florida’s Black Caesar

Let’s let Leeson have the final quote:

Pirates were a diverse lot. A sample of 700 pirates active in the Caribbean between 1715 and 1725, for example, reveals that 35 percent were English, 25 percent were American, 20 percent were West Indian, 10 percent were Scottish, 8 percent were Welsh, and 2 percent were Swedish, Dutch, French, and Spanish …
Pirate crews were also racially diverse. Based on data available from 23 pirate crews active between 1682 and 1726, the racial composition of ships varied between 13 and 98 percent black. If this sample is representative, 25–30 percent of the average pirate crew was of African descent.

There were, of course, very sensible reasons why a large percentage of pirates were black: better a pirate than a slave.

(Personally, while I think pirates are interesting in much the same vein as Genghis Khan, I would still like to note that they were extremely violent criminals who murdered innocent people.)



Anecdotal observations of India, Islam, and the West

Updated values chart!

People seemed to like this Twitter thread, so I thought I would go into some more detail, because trying to compress things into 140 characters means leaving out a lot of detail and nuance. First the original, then the discussion:

Back around 2000-2005, I hung out in some heavily Muslim forums. I learned a few things:
1. Muslims and Indians do not get along. At all. Hoo boy. There are a few people who try to rise above the fray, but there’s a lot of hate. (and yes there are historical reasons for this, people aren’t just random.)
2. I didn’t get to know that many Muslims very well, but among those that I did, the nicest were from Iran and Pakistan, the nastiest from Britain. (I wasn’t that impressed by the Saudis.)
3. Muslims and Westerners think differently about “responsibility” for sin. Very frequent, heated debate on the forum. Westerners put responsibility to not sin on the sinner. Hence we imprison [certain] criminals. Islam puts responsibility on people not to tempt others.
Most obvious example is bikinis vs burkas. Westerners expect men to control their impulse to have sex; Muslims expect women not to tempt men. To the Westerner it is obvious that men should display self control, while to the Muslim it is obvious that women should not tempt men. (Don’t display what you aren’t selling.)
Likewise w/ free speech vs. offense. Westerners expect people to control their feelings over things like Piss Christ or Mohammad cartoons. Islam blames people for offending/hurting other people’s feelings; the onus for non-offense is on the speaker, not the hearer.

Obviously this is simplified and exceptions exist, but it’s a pretty fundamental difference in how people approach social problems.

Detailed version:

Back in my early days upon the internet, I discovered that you can join forums and talk to people from all over the world. This was pretty exciting and interesting, and I ended up talking to people from places like India, China, Israel, Pakistan, Iran, etc. It was here that I began really understanding that other countries have their own internal and external politics that often have nothing at all to do with the US or what the US thinks or wants.

1. The rivalry between India and Pakistan was one such surprise. Sure, if you’ve ever picked up a book on the recent history of India or Pakistan or even read the relevant Wikipedia pages, you probably know all of this, but as an American whose main exposure to sub-continental culture was samosas and music, the vitriolic hate between the two groups was completely unexpected.

Some background, from the Wikipedia:

Since the partition of India in 1947 and creation of modern States of India and Pakistan, the two South Asian countries have been involved in four wars, including one undeclared war, and many border skirmishes and military stand-offs.

The Kashmir issue has been the main cause, whether direct or indirect, of all major conflicts between the two countries with the exception of the Indo-Pakistani War of 1971 where conflict originated due to turmoil in erstwhile East Pakistan (now Bangladesh). …

As the Hindu and Muslim populations were scattered unevenly in the whole country, the partition of British India into India and Pakistan in 1947 was not possible along religious lines. Nearly one third of the Muslim population of British India remained in India.[3] Inter-communal violence between Hindus, Sikhs and Muslims resulted in between 500,000 and 1 million casualties.[1]

Following Operation Searchlight and the 1971 Bangladesh atrocities, about 10 million Bengalis in East Pakistan took refuge in neighbouring India.[22] India intervened in the ongoing Bangladesh liberation movement.[23][24] After a large scale pre-emptive strike by Pakistan, full-scale hostilities between the two countries commenced. …

This war saw the highest number of casualties in any of the India-Pakistan conflicts, as well as the largest number of prisoners of war since the Second World War after the surrender of more than 90,000 Pakistani military and civilians.[29] In the words of one Pakistani author, “Pakistan lost half its navy, a quarter of its air force and a third of its army”.[30]

Please note that India and Pakistan both HAVE NUKES.

Some people are also still angry about the Muslim conquest of India:

Muslim conquests on the Indian subcontinent mainly took place from the 12th to the 16th centuries, though earlier Muslim conquests made limited inroads into modern Afghanistan and Pakistan as early as the time of the Rajput kingdoms in the 8th century. With the establishment of the Delhi Sultanate, Islam spread across large parts of the subcontinent. In 1204, Bakhtiar Khilji led the Muslim conquest of Bengal, marking the eastern-most expansion of Islam at the time.

Prior to the rise of the Maratha Empire, which was followed by the conquest of India by the British East India Company, the Muslim Mughal Empire was able to annex or subjugate most of India’s kings. However, it was never able to conquer the kingdoms in upper reaches of the Himalayas such as the regions of today’s Himachal Pradesh, Uttarakhand, Sikkim, Nepal and Bhutan; the extreme south of India, such as Travancore and Tamil Nadu; and in the east, such as the Ahom kingdom in Assam.

I don’t know if any disinterested person has ever totaled up the millions of deaths from invasions and counter-invasions, (you can start by reading Persecution of Hindus and Persecution of Buddhists on Wikipedia, or here on Sikhnet, though I can’t say if these are accurate articles,) but war is a nasty, violent thing that involves lots of people dying. My impression is that Islam has historically been more favorable to Judaism and Christianity than to Hinduism because Christians, Jews, and Muslims are all monotheists whose faiths descend from a common origin, whereas Hindus are pagans, which is just right out.

Anyway, I am not trying to give a complete and accurate history of the subcontinent, which is WAY TOO LONG for a paltry blog post. I am sure people on both sides could write very convincing and well-reasoned posts arguing that their side is the good and moral side and that the other side is the one that committed all of the atrocities.

I am just trying to give an impression of the conflict people are arguing about.

Oh, hey, did you know Gandhi was murdered by a Hindu nationalist in a conflict over Pakistan?

Gandhi’s vision of an independent India based on religious pluralism, however, was challenged in the early 1940s by a new Muslim nationalism which was demanding a separate Muslim homeland carved out of India.[9] Eventually, in August 1947, Britain granted independence, but the British Indian Empire[9] was partitioned into two dominions, a Hindu-majority India and Muslim-majority Pakistan.[10] As many displaced Hindus, Muslims, and Sikhs made their way to their new lands, religious violence broke out, especially in the Punjab and Bengal. Eschewing the official celebration of independence in Delhi, Gandhi visited the affected areas, attempting to provide solace. In the months following, he undertook several fasts unto death to promote religious harmony. The last of these, undertaken on 12 January 1948 when he was 78,[11] also had the indirect goal of pressuring India to pay out some cash assets owed to Pakistan.[11] Some Indians thought Gandhi was too accommodating.[11][12] Among them was Nathuram Godse, a Hindu nationalist, who assassinated Gandhi on 30 January 1948 by firing three bullets into his chest.[12]

The American habit of seeing everything through the Cold War lens (we sided with Pakistan against India for Cold War Reasons) and reducing everything to narrow Us-Them dynamics is really problematic when dealing with countries/groups with a thousand or so years of history between them. (This is part of what makes the whole “POC” term so terrible. No, non-whites are not a single, homogeneous mass unified entirely by white victimization.)

Obviously not all 1 billion or so Hindus and 1 billion or so Muslims in the world are at each other’s throats. Many save their rivalry for the annual India-Pakistan cricket game:

The India–Pakistan cricket rivalry is one of the most intense sports rivalries in the world.[1][2] An India–Pakistan cricket match has been estimated to attract up to one billion viewers, according to TV ratings firms and various other reports.[3][4][5] The 2011 World Cup semifinal between the two teams attracted around 988 million television viewers.[6][7][8] Also tickets for the India-Pakistan match in the 2015 World Cup sold out just 12 minutes after they went on sale.

The arch-rival relations between the two nations, resulting from the extensive communal violence and conflict that marked the Partition of British India into India and Pakistan in 1947 and the subsequent Kashmir conflict, laid the foundations for the emergence of an intense sporting rivalry between the two nations who had erstwhile shared a common cricketing heritage. …

At the same time, India-Pakistan cricket matches have also offered opportunities for cricket diplomacy as a means to improve relations between the two countries by allowing heads of state to exchange visits and cricket followers from either country to travel to the other to watch the matches.

(Gotta love the phrase “erstwhile shared a common cricketing heritage.”)

And some Hindus and Muslims are totally chill and even like each other. After all, India and Pakistan are next door to each other and I’m sure there are tons of good business opportunities that enterprising folks would like to take advantage of.

But there’s a lot of anger.

BTW, there’s also a rivalry between India and China, with both sides accusing each other of massive educational cheating.

2. I should note that the people I talked to definitely weren’t a random distribution of Muslims from around the world. When I say “the Muslims” here, I really mean, “the particular Muslims I happened to talk to.” The folks you’re likely to meet on the internet are high class, educated, speak English, and come from areas with good internet connections. So this definitely isn’t a good way to learn what the Average Moe in most Muslim countries thinks.

Note: People in countries colonized by Britain (like India and Pakistan) tend to speak English because it’s taught as a second language in their schools, while people in Indonesia (the world’s biggest Muslim country) probably learn Dutch (they were colonized by the Dutch) and folks in Morocco learn French. The nicest Muslims I met were from Iran and Pakistan and the least pleasant were from Europe. (The Saudis were the kind of folks who would sweetly explain why you needed to die.)

Why? Aside from the vicissitudes of colonial languages and population size, Iran and Pakistan are both countries with plenty of culture, history, and highly-educated people. The Persian Empire was quite an historical force, and the ruins of some of the world’s oldest cities (from the Indus-Valley culture) are in Pakistan (the Indians would like me to note that many of these ruins are also in India and that Indians claim direct cultural descent from the IVC and Pakistanis do not.) Some of the Iranians I met were actually atheists, which is not such a great thing to be in Iran.

Pakistan, IMO, has been on a long, slow, decline from a country with a hopeful future to one with a much dimmer future. Smart, highly-educated Pakistanis are jumping ship in droves. I can’t blame them (I’d leave, too,) but this leaves behind a nation populated with the less-capable, less-educated, and less-pro-West. (Iran probably has less of a problem with brain-drain.)

Many of the other Muslim countries are smaller, less English-speaking, or started down the path to mass literacy more recently, and so don’t stand out particularly in my memories.

The absolute worst person lived in Britain. The only reason he was even allowed to stick around and wasn’t banned for being a total asshole was that one of the female posters had a crush on him and the rest of us played nice for her sake, a sentence I am greatly shamed to write. I’ve never met a Muslim from an actual Muslim country as rude as this guy, who posted endless vitriol about how much he hated Amerikkka for its racism against blacks, Muslims, and other POCs.

Theory: Muslims in predominantly Muslim countries have no particular reason to care what white males are up to in other countries, but Muslims in Britain do, and SJW ideology provides a political victimology framework for what would otherwise be seen as normal competition between people or the difficulties of living in a foreign culture.

3. Aside from the issue of white men, this was before the days of the Muslim-SJW alliance, so there were lots of vigorous, entertaining debates on subjects like abortion, women’s rights, homosexuality, blasphemy, etc. By “debate” I mean “people expressed a variety of views;” there was obviously no one, single viewpoint on either side, but there were definitely consistent patterns and particular views expressed most of the time.

Muslims tend to believe that people have obligations to their families and societies. I have read some lovely tributes to family members from Muslims. I have also been surprised to discover that people whom I regarded as very similar to myself still believed in arranged marriage, that unmarried adult children should live with their parents and grandparents to help them out, etc. These are often behavioral expectations that people don’t even think to mention because they are so common, but very different from our expectation that a child at the age of 18 will move out and begin supporting themselves, and that an adult child who moves in with their parents is essentially a “failure.”

The American libertarian notion that the individual is not obligated to their family or society, that society should not enforce particular standards of behavior, and that everyone should instead pursue their own individual self-interest is highly alien throughout much of the world. (I don’t think it’s even that common in Europe.) Americans tend to see people as individuals, personally responsible for their own actions, whereas Muslims tend to think the state should enforce certain standards of behavior.

This leads to different thoughts about sin, or at least certain kinds of sin. For example, in the case of sexual assault/rape, Westerners generally believe that men are morally obligated to control their impulses toward women, no matter what those women are wearing. There are exceptions, but in general, women expect to walk around wearing bikinis in Western society without being randomly raped, and if you raped some random ladies on the beach just “because they were wearing bikinis,” you’d get in big trouble. We (sort of) acknowledge that men find women in bikinis attractive and that they might even want to have sex with them, but we still place the onus of controlling their behavior on the men.

By contrast, Muslims tend to place the onus for preventing rape on the women. Logically, if women are doing something they know arouses men, then they shouldn’t do it if they don’t want the men to be aroused; don’t display what you aren’t selling. The responsibility isn’t on the men to control their behavior, but on the women to not attract male attention. This is why you will find more burkas than bikinis in Afghanistan, and virtually no burkas anywhere outside of the Muslim world.

If you don’t believe me, here are some articles:

Dutch Woman jailed in Qatar after Reporting Rape, Convicted of “Illicit Sex”

According to Brian Lokollo, a lawyer who was hired by the woman’s family, Laura was at a hotel bar having drinks with a friend in the Qatari capital, but then had a drink that made her feel “very unwell.”
She reportedly woke up in an unfamiliar location and realized “to her great horror” that she had been raped after her drink was spiked, Lokollo said.
When she reported the rape to the police, she herself was imprisoned. …
No mention was made of the rape accusation during proceedings. Neither defendant was present in court, in what was the third hearing in the case. …
At a court hearing in Doha Monday, the 22-year old, whom CNN has identified only as Laura, was handed a one-year suspended sentence and placed on probation for three years for the sex-related charge, and fined 3,000 Qatari Riyals ($823) for being drunk outside a licensed location.

A British tourist has been arrested in Dubai on charges of extramarital sex after telling police a group of British nationals raped her in the United Arab Emirates, according to a UK-based legal advice group called Detained in Dubai.

“This is tremendously disturbing,” Radha Stirling, the group’s founder and director, said in a statement. “Police regularly fail to differentiate between consensual intercourse and violent rape.”

Stoning of Aisha Ibrahim Duhulow:

The stoning of Aisha Ibrahim Duhulow was a public execution carried out by the Al-Shabaab militant group on October 27, 2008 in the southern port town of Kismayo, Somalia. Initial reports stated that the victim, Aisha Ibrahim Duhulow, was a 23-year-old woman found guilty of adultery. However, Duhulow’s father and aunt stated that she was 13 years old, under the age of marriage eligibility, and that she was arrested and stoned to death after trying to report that she had been raped. The execution took place in a public stadium attended by about 1,000 bystanders, several of whom attempted to intervene but were shot by the militants.[1][2][3]

There’s a similar dynamic at work with Free Speech/religious freedom issues. The average Christian Westerner certainly isn’t happy about things like Piss Christ or Jesus dildos, yet such things are allowed to exist; there is a long history of legal precedent on the subject of heretical and morally offensive works of “art,” and last time I checked, no one got shot for smearing elephant dung on a picture of the Virgin Mary. The general legal standard in the West is that it doesn’t really matter if speech hurts your feelings, it’s still protected. (Here I would cite the essential dignity of the self in being allowed to express one’s true beliefs, whatever they are, and being allowed to act in accordance with one’s own moral beliefs.) I know there are some arguments about this, especially among SJWs, and some edge cases where particular speech isn’t allowed, but the 1st Amendment hasn’t been repealed yet.

By contrast, Muslims tend to see people as morally responsible for the crime of hurting other people’s feelings, offending them, or leading them away from the true faith (which I assume would result in those people suffering eternal torment in something like the Christian hell.) Yes, I have read very politely worded arguments for why apostates need to be executed for the good of society (because they make life worse for everyone else by making society less homogenous.) I’ve also known atheists who lived in Muslim countries who obviously did not think they should be executed.

Basically, Westerners think individuals should strive to be ethical and so make society ethical, while Muslims believe that society should enforce ethical behavior, top-down, on its members. (Both groups, of course, punish people for crimes like theft.)

The idea of an SJW-Muslim alliance is absurd–the two groups deeply disagree on almost every single issue, except their short-term mutual interest in changing the power structure.