Further thoughts on spiteful mutants

I’ve had two requests recently, one for opinions about the spiteful mutant hypothesis, the other about Kaczynski’s work. These are related concepts, of course, in that they both deal with the structure of society.

It took me a little while to realize that the “spiteful mutant” hypothesis has to do with bioleninism, not just studies about mice (clearly I haven’t been reading the right things). Bioleninism is the idea that elites may prefer to hire and promote unqualified people because such people will then be more loyal to the regime, knowing they couldn’t get as good a job elsewhere.

The problem with this theory is that the “unqualified” people being hired by the elites are not grateful or loyal at all. If anything, they are resentful, malicious, petty, and greedy, ready to tear down everyone who “gave them a chance,” especially if they think they can promote themselves at the same time.

The difficulty with the mouse models is that they deal with autistic-model mice and their effects on the social structure of a normal mouse colony, but in real life, the people throwing wrenches into society are not autistic–if anything, they are hyper-social. (Or as Ted would say, they are oversocialized.)

Humans, obviously, are not mice. Humans build things. (Mice build things, too, but much less than humans.) We build lots of things, especially those of us in the modern, industrialized world.

I’m sure I’ve harped on this over and over, but I still find the modern, industrial world amazing (and slightly disconcerting). I am amazed that our homeless are fat, that ordinary people have toilets, that infant mortality is below 1%. On the scale of human history, we as a species have changed almost everything about our lives in the blink of an eye, and we have yet to see how all of this works out for us. Certainly we are not adapted to it, but that doesn’t necessarily make it bad.

Building our modern world has required the development of new mental skills that our ancestors didn’t possess, like reading, writing, and arithmetic. 10,000 years ago, before the invention of the alphabet, these skills didn’t exist; today almost everyone has mastered them.

Language, both oral and written, requires the ability to generalize. Take something as simple as the letter “A”. It comes in three standard forms, A, a, and the little a used in handwriting. It also comes in many different fonts, in sloppy and neat handwriting, in cursive and smudged ink. You can read the letter “a” even when part of the letter is missing.

Even worse than the variability in the mere shape of the letter, “A” does not consistently refer to a particular sound. It sounds like it is supposed to in “apple,” but sounds like a U in “was.”

Now multiply by the whole alphabet and all of the different voices and accents, noisy rooms and distorted audio, and it’s a wonder that we can understand each other at all, much less read sentences like, “Rpie rsapebrreis are delciiuous in the smmuertmie,” and “Gosts luv cookies b_t candy iz b_ttr.”

Our understanding of language relies on a lot of processing to fill in the gaps between what we hear/see and what was meant. A similar effect is at play with optical illusions.

Both of these faces, for example, are red. Your brain takes the raw data from your eyes, does some processing and color-correction to account for the other colors in the image, and ends up concluding that one screamer is actually orange.

Our brains do this because real life has shifting patterns of color and shade, and our brains are trying to figure out what the “real” color would be if you removed those effects.

One of the interesting things about autistic people is that they are less likely to “see” optical illusions. This might turn out to be one of those amusing psychological findings that don’t replicate, but assuming it’s sound, it seems to be because their brains do less processing of the raw data they receive. This means they see the world more as it actually is and less as they think it should be.

The advantage of seeing the world as it actually is and not as you want it to be obviously lies in professions that autists or semi-aspie people excel in, like math and engineering. Unlike reading, you can’t just go filling in missing data in mathematical equations. Lewis Carroll could write poems by stringing together things that sound like words, but you can’t build a circuit by wiring together a bunch of capacitors into something that looks generally like the idea of a circuit. An equation that is missing a digit isn’t solvable, a measurement with a misplaced decimal is useless (and potentially deadly), and a misplaced image tag in a post’s code once completely messed up my blog’s layout.

If reading and talking require being good at adding information until you get the general gist of what is meant, math and engineering require carefully not adding information. Humans aren’t very good at this, because it’s a very new skill.

Modern industrial civilization is only possible because of precision engineering. You cannot fit a billion transistors on a microchip without precision. You cannot send communication and navigation satellites into space without building complicated rockets that have to not explode on the launchpad (a surprisingly difficult task) and then precisely calculating their trajectories (otherwise they will veer off disastrously). These computations are complex enough that they tax the limits of human abilities. As Drozdov et al. wrote in Fundamentals of Computer Technology in 1964:

Assume that we are to determine the trajectory of a guided space rocket. For this purpose we must calculate the points of the trajectory lying far ahead in the direction of motion of the rocket; only in that case can we estimate the deviation of the rocket from the prescribed direction and apply the necessary midcourse corrections. Such a calculation can be made only by an electronic computer, since workers would require tens of days or several months to calculate a single trajectory, while a rocket takes only three days to reach the moon. The computer will calculate the trajectory in minutes or tens of minutes.

Computers allow us to be more precise, much faster.
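To give a flavor of what “calculating the points of the trajectory lying far ahead” involves, here is a minimal sketch in Python: it steps a craft’s position forward under Earth’s gravity with a crude Euler integrator. The constants, function name, and initial conditions are my own illustrative choices, not anything from Drozdov’s text or a real guidance system.

```python
# Minimal sketch of the arithmetic involved in predicting "points of the
# trajectory lying far ahead": step a craft's position forward under Earth's
# gravity with a crude Euler integrator. Constants, names, and initial
# conditions are illustrative only, not a real guidance computation.

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3 / s^2

def predict_trajectory(pos, vel, dt=1.0, steps=3600):
    """Return a list of predicted positions, one per time step."""
    x, y, z = pos
    vx, vy, vz = vel
    points = []
    for _ in range(steps):
        r = (x * x + y * y + z * z) ** 0.5
        a = -MU_EARTH / r**3               # acceleration magnitude factor
        ax, ay, az = a * x, a * y, a * z   # gravity pulls toward Earth's center
        vx, vy, vz = vx + ax * dt, vy + ay * dt, vz + az * dt
        x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
        points.append((x, y, z))
    return points

# A craft in roughly low-Earth-orbit conditions (~400 km up, ~7.7 km/s):
future = predict_trajectory(pos=(6.771e6, 0.0, 0.0), vel=(0.0, 7670.0, 0.0))
print(future[-1])  # predicted position one hour ahead, in meters
```

Even this toy version requires thousands of multiplications and additions per predicted hour of flight: trivial for a machine, hopeless by hand on a rocket’s timescale.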

Back in the 1700s, sailors faced a daunting problem: they had no reliable way to measure longitude while at sea. In 1707, the inability to determine their position led to four British warships running onto the rocks and sinking, causing the deaths of over a thousand sailors. The British Parliament subsequently offered a reward of 20,000 pounds (about 3 million pounds today) to anyone who could solve the problem.

Early attempts focused on the old-fashioned way of finding one’s way and telling time: the heavens. The Board of Longitude awarded 3,000 pounds, for example, to the widow of Tobias Mayer for his lunar tables. Just as the shadows cast by the sun or the height of the North Star could be used to determine one’s latitude, so, they hoped, could the moon assist with longitude. Unfortunately, this method is clunky, difficult, and relies too much on being able to see the moon.

The clockwork in Harrison’s H4 watch

John Harrison came up with a radically new solution: a watch. So long as your watch shows Greenwich time, you can compare it to the local time (observable via the sun or stars), and the difference gives your longitude. Unfortunately, clocks that kept time precisely enough to accurately determine one’s longitude far from land didn’t exist in Harrison’s day: he had to build them himself. The resulting clocks are masterpieces, incredibly accurate for their day:

Harrison began working on his second ‘sea watch’ (H5) while testing was conducted on the first, which Harrison felt was being held hostage by the Board. After three years he had had enough; Harrison felt “extremely ill used by the gentlemen who I might have expected better treatment from” and decided to enlist the aid of King George III. He obtained an audience with the King, who was extremely annoyed with the Board. King George tested the watch No.2 (H5) himself at the palace and after ten weeks of daily observations between May and July in 1772, found it to be accurate to within one third of one second per day. King George then advised Harrison to petition Parliament for the full prize after threatening to appear in person to dress them down. Finally in 1773, when he was 80 years old, Harrison received a monetary award in the amount of £8,750 from Parliament for his achievements, but he never received the official award (which was never awarded to anyone). He was to survive for just three more years. …

Captain James Cook used K1, a copy of H4, on his second and third voyages, having used the lunar distance method on his first voyage.[22] K1 was made by Larcum Kendall, who had been apprenticed to John Jefferys. Cook’s log is full of praise for the watch and the charts of the southern Pacific Ocean he made with its use were remarkably accurate.
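For concreteness, here is a small sketch of the arithmetic behind Harrison’s method; the function name and example figures are my own illustration, not from any period navigation manual. The Earth turns 360 degrees in 24 hours, so each hour of difference between local time and Greenwich time corresponds to 15 degrees of longitude.

```python
# Longitude from the difference between local time and Greenwich time.
# The Earth rotates 360 degrees in 24 hours, i.e. 15 degrees per hour.

def longitude_from_clocks(local_hours, greenwich_hours):
    """Degrees of longitude; positive is east of Greenwich, negative is west."""
    return (local_hours - greenwich_hours) * 15.0

# Local noon (determined from the sun) while the chronometer reads 15:00 Greenwich:
print(longitude_from_clocks(12.0, 15.0))  # -45.0, i.e. 45 degrees west

# Why the clock's precision matters: an error of one third of a second per day
# (H5's tested accuracy) over a six-week voyage is about 14 seconds, which is
# (14 / 3600) * 15, roughly 0.06 degrees of longitude, or about 3.5 nautical
# miles at the equator. A clock drifting a few seconds a day would instead put
# you tens of miles off by the end of the same voyage.
```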

Okay, so that was a bit of an aside, but it’s a great story.

Societies with primitive technology have much less need for precision. It is difficult to imagine what a John Harrison would have dedicated his life to in a tribe of nomadic goat herders: they would have no need for timekeeping precise to the second. Primitive people didn’t even need fancy numbers like “12”; our lovely base-10 number system is a recent invention. Primitive people generally got by with tally marks; many had number systems that essentially stopped at 3: 1, 2, 3, lots.

We discussed this back in my review of Numbers and the Making of Us; the author spent much of his childhood among the Piraha of the Amazon, who have almost no words for numbers and no sense of numbers bigger than three (the first three are instinctual; many animals can count to three).

The author’s parents (missionaries) have actually tried to teach the Piraha to count, but after many years of training they still struggled to grasp concepts like “7” and 1-to-1 correspondence (that is, if I set up a line of 7 cans, can you set up a matching line that also contains 7 cans?)

In short, the advance of technology over the past 200 years has required the development of much higher levels of mental precision than our ancestors used.

Most people, of course, want to be precise sometimes and generalize at other times, and our brains naturally switch between the two modes depending on what we’re doing, but there is obviously a trade-off between being exceptionally good at one and exceptionally good at the other. (Smart people may have enough brains to do both well, but the rest of us have to pick one side or the other.)

In the “spiteful mutants” experiment, mouse society is inherently social, and the mutants who disrupt it are “autistic” (or what passes for autism in mice). In real life, much of our modern world was built by thing-obsessed people like John Harrison or Bill Gates. Our society isn’t “autistic” by any means, but it has a place for such people that wouldn’t exist in other societies. By contrast, the “spiteful mutants” in our society are oversocialized folks like this guy:

No one who was busy trying to get the tolerances on their widget-producing machines down to less than a tenth of a millimetre ever had time to worry about the “liminal space” in which anyone’s identity is made.

In short, these people are trying to make us more social, more hierarchical, more like the original mouse community, with all of the mice focused on reading social cues from each other, and less like the “mutant” community, where attention shifts away from social cues and toward things.

As for where Ted fits into all of this, well, I suppose blowing people up is pretty spiteful, but he might never have done it in the first place if people had just left his woods alone.

Spiteful Mutants?

I recently received an inquiry as to my opinion about the “spiteful mutant hypothesis.” After a bit of reading about genetic deletions in mouse colonies, I realized that the question was probably referring to bioleninism rather than rodents (though both are valid).

Of Mice and Men: Empirical Support for the Population-Based Social Epistasis Amplification Model, by Sarraf and Woodley of Menie, is an interesting article. The authors look at a study by Kalbassi et al. (2017) about social structures in mouse populations. Experimenters raised two groups of mice. One group had mice with normal mouse genes; mice in this group were sensitive to mouse social cues and formed normal mouse social hierarchies. The other group had mostly normal mice, but also some mice with genetic mutations that made them less sensitive to social cues. In the second group, the mutant mice were not simply excluded while the rest of the mice went on their merry way; rather, the entire structure of the group changed:

Among the more striking findings are that the genotypically mixed … litters lacked “a structured social hierarchy” (p. 9) and had lower levels of testosterone (in both Nlgn3y/- and Nlgn3y/+ mice); additionally, Nlgn3y/+ mice from genotypically homogeneous litters showed more interest in “social” as opposed to “non-social cues” (p. 9) than Nlgn3y/+ mice from genotypically mixed litters [the latter did not show a preference for one type of cue over the other, “showing an absence of interest for social cues” (p. 9)].

In other words, in litters where all of the mice are social, they can depend on each other to send and receive social cues, so they do, and so they form social hierarchies. Somehow, this makes the (male) mice have a lot of testosterone. In litters where the mice cannot depend on their companions to consistently send and receive social cues, even the genetically normal mice don’t bother as much, social hierarchies fail to form, and the mice have less testosterone.

A “spiteful mutation” in this context is one that imposes costs not only on the carrier but also on those around them: in this case, by changing the social structure and decreasing the testosterone of the other mice.

It’s a good article (and not long); you should read it before going on.

So what is bioleninism? I’ve seen the term kicked around enough to have a vague sense of it, but let’s be a bit more formal–with thanks to Samir Pandilwar for succinctness:

Developed by Spandrell (alias, Bloody Shovel) it takes the basic Leninist model of building a Party to rule the state out of the dregs of society, and shifts this to the realm of biology, wrong-think biology in particular, building the party out of people who are permanent losers within the social order.

I think the term gets used more generally when people notice that people in positions of power (or striving to make themselves more powerful via leftist politics) are particularly unattractive. In this context, these people are the “spiteful mutants” trying to change the social structure to benefit themselves.

We humans, at least in the west, like to think of ourselves as “individuals” but we aren’t really, not completely. As Aristotle wrote, “Man is a political animal;” we are a social species and most of us can’t survive without society at large–perhaps none of us. Virtually all humans live in a tribe or community of some sort, or in the most isolated cases, have at least occasional trade with others for things they cannot produce themselves.

Our species has been social for its entire existence–even our nearest relatives, the other chimps and gorillas, are social animals, living in troops or families.

We talk a lot about “increasing atomization and individualism” in populations that have transitioned from traditional agricultural (or other lifestyles) to the urban, industrial/post-industrial life of the cities, and this is certainly true in a legal sense, but in a practical sense we are becoming less individual.

A man who lives alone in the mountains must do and provide most things for himself; he produces his own food, is warmed by the efforts of his own ax, and drinks water from his own well. Even his trash is his own responsibility. Meanwhile, people in the city depend on others for so many aspects of their lives: their food is shipped in, their hair is cut by strangers, their houses are cleaned by maids, their water comes from a tap, and even their children may be raised by strangers (often by necessity rather than choice). The man in the mountains is more properly an individual, while the man in the city is inextricably bound together with his fellows.

There isn’t anything objectively wrong with any particular piece of this (fine dining is delicious and hauling water is overrated), but I find the collective effect on people who have come to expect to live this way vaguely unnerving. It’s as though they have shed pieces of themselves and outsourced them to others.

Or as Kaczynski put it:

The industrial-technological system may survive or it may break down. If it survives, it MAY eventually achieve a low level of physical and psychological suffering, but only after passing through a long and very painful period of adjustment and only at the cost of permanently reducing human beings and many other living organisms to engineered products and mere cogs in the social machine. Furthermore, if the system survives, the consequences will be inevitable: There is no way of reforming or modifying the system so as to prevent it from depriving people of dignity and autonomy.

(I have not read the whole of his manifesto, but I keep returning to this point, so eventually I should.)

How different are we from the little bees who cannot live on their own, but each have their role in the buzzing hive? (This is where the spiteful mutant hypothesis comes in.) Bees don’t arrange themselves by talking it over and deciding that this bee would be happy visiting flowers and that bee would be happy guarding the hive. It’s all decided beforehand via bee genetics.

How much “free will” do we really have to choose our human social relations, and how much of it is instinctual? Do we choose whom we love and hate, whom we respect and whom we deem idiots? Did we choose who would found a billion-dollar company and who would be homeless?

(We don’t really know how far instincts and biology go, of course.)

Any genes that affect how human societies cohere and the social hierarchies we form would likely produce different results if found in different quantities in different groups, just like the genes in the mouse models. Such genes could predispose us to be more or less social, more or less aggressive, or perhaps to value some other elements in our groups.

One of the most under-discussed changes wrought by the modern era is the massive decrease in infant mortality. Our recent ancestors suffered infant mortality rates between 20 and 40 percent (sometimes higher). Dead children were once a near-universal misery; today, almost all of them live.

Among the dead, of course, were some quantity of carriers of deleterious mutations, such as those predisposing one to walk off a cliff or to be susceptible to malaria. Today, our mutants survive–sometimes even those suffering extreme malfunctions.

This doesn’t imply that we need high disease levels to weed out bad mutations: the Native Americans had nice, low disease levels prior to contact with European and African peoples, and yet their societies seem to have been perfectly healthy. This low-disease state was probably the default our ancestors all enjoyed prior to the invention of agriculture and dense, urban living. They probably still had high rates of infant mortality by modern standards (I haven’t been able to find numbers, but our relatives the chimps and bonobos have infant mortality rates around 20-30%).

That all said, I’m not convinced that all this so-called “autistic” behavior (e.g., the mouse models) is bad. Humans who are focused on things instead of social relations have gifted us much of modern technology. Would we give up irascible geniuses like Isaac Newton just to be more hierarchical? The folks implicitly criticized in the “bioleninist” model are far more obsessed with social hierarchies (and their place in them) than I am. I do not want to live like them, constantly analyzing every social interaction for whether it contains micro-slights or whether someone has properly acknowledged my exact social status (“That’s Doctor X, you sexist cretin”).

I want to be left in peace.