I’ve had two requests recently, one for opinions about the spiteful mutant hypothesis, the other about Kaczynski’s work. These are related concepts, of course, in that they both deal with the structure of society.
It took me a little while to realize that the “spiteful mutant” hypothesis has to do with bioleninism, not just studies about mice (clearly I haven’t been reading the right things). Bioleninism is the idea that elites may prefer to hire/promote unqualified people because such people will then be more loyal to the regime, since they know they couldn’t get as good a job elsewhere.
The problem with this theory is that the “unqualified” people being hired by the elites are not grateful or loyal at all. If anything, they are resentful, malicious, petty, and greedy, ready to tear down everyone who “gave them a chance,” especially if they think they can promote themselves at the same time.
The difficulty with the mouse models is that they deal with autistic-model mice and their effects on the social structure of a normal mouse colony, but in real life, the people throwing wrenches into society are not autistic–if anything, they are hyper-social. (Or as Ted would say, they are oversocialized.)
Humans, obviously, are not mice. Humans build things. (Mice build things, too, but much less than humans.) We build lots of things, especially those of us in the modern, industrialized world.
I’m sure I’ve harped on this over and over, but I still find the modern, industrial world amazing (and slightly disconcerting). I am amazed that our homeless are fat, that ordinary people have toilets, that infant mortality is below 1%. On the scale of human history, we as a species have changed almost everything about our lives in the blink of an eye, and we have yet to see how all of this works out for us. Certainly we are not adapted to it, but that doesn’t necessarily make it bad.
Building our modern world has required the development of new mental skills that our ancestors didn’t possess, like reading, writing, and arithmetic. 10,000 years ago, before the invention of the alphabet, these skills didn’t exist; today almost everyone has mastered them.
Language, both oral and written, requires the ability to generalize. Take something as simple as the letter “A”. It comes in three standard forms, A, a, and the little a used in handwriting. It also comes in many different fonts, in sloppy and neat handwriting, in cursive and smudged ink. You can read the letter “a” even when part of the letter is missing.
Even worse than the variability in the mere shape of the letter, “A” does not consistently refer to a particular sound. It sounds like it is supposed to in “apple,” but sounds like a U in “was.”
Now multiply by the whole alphabet and all of the different voices and accents, noisy rooms and distorted audio, and it’s a wonder that we can understand each other at all, much less read sentences like, “Rpie rsapebrreis are delciiuous in the smmuertmie,” and “Gosts luv cookies b_t candy iz b_ttr.”
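If you want to generate gibberish like that yourself, here’s a minimal sketch in Python (the function names are my own, purely for illustration): it shuffles a word’s interior letters while keeping the first and last in place, which is usually enough for the word to stay readable.

```python
import random

def scramble_word(word):
    """Shuffle a word's interior letters, keeping the first and last in place."""
    if len(word) <= 3:
        return word
    middle = list(word[1:-1])
    random.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

def scramble_sentence(sentence):
    # Scramble each word independently; punctuation handling is ignored for simplicity.
    return " ".join(scramble_word(w) for w in sentence.split())

print(scramble_sentence("Ripe raspberries are delicious in the summertime"))
# e.g. "Rpie rrsbeiapres are dcielouis in the stmremuime" -- still readable,
# because your brain fills in what the letters "should" be.
```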
Our understanding of language relies on a lot of processing to fill in the gaps between what we hear/see and what was meant. A similar effect is at play with optical illusions.

Both of these faces, for example, are red. Your brain takes the raw data from your eyes, does some processing and color-correction to account for the other colors in the image, and ends up concluding that one screamer is actually orange.
Our brains do this because real life has shifting patterns of color and shade, and our brains are trying to work out what the “real” color would be if those effects were removed.
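You can fake a crude version of this correction in code with a “gray world” white balance: assume the scene’s average color ought to be neutral, and rescale each channel until it is. This is just a toy sketch (Python with numpy, on a made-up image array), not a claim about how the visual system actually computes it:

```python
import numpy as np

def gray_world_balance(image):
    """Crude 'gray world' color correction.

    Assumes the scene's average color should be neutral gray and rescales
    each RGB channel so its mean matches the overall mean brightness.
    `image` is an (H, W, 3) float array with values in [0, 1].
    """
    channel_means = image.reshape(-1, 3).mean(axis=0)   # average R, G, B over the scene
    target = channel_means.mean()                       # overall brightness
    corrected = image * (target / channel_means)        # rescale each channel toward neutral
    return np.clip(corrected, 0.0, 1.0)

# A scene lit by a cyan-tinted "light": every pixel has its red channel suppressed.
scene = np.full((4, 4, 3), 0.8) * [0.5, 1.0, 1.0]            # grayish background under the tint
scene[0, 0] = [1.0, 0.2, 0.2] * np.array([0.5, 1.0, 1.0])    # a "red" patch under the same tint
print(scene[0, 0])                      # [0.5, 0.2, 0.2] -- the raw values look muted
print(gray_world_balance(scene)[0, 0])  # red channel boosted back up, closer to the "real" red
```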
One of the interesting things about autistic people is that they are less likely to “see” optical illusions. This might turn out to be one of those amusing psychological findings that doesn’t replicate, but assuming it’s sound, it seems to be because their brains do less processing of the raw data they receive. This means they see the world more as it actually is and less as they think it should be.
The advantage of seeing the world as it actually is, rather than as you want it to be, shows up most clearly in the professions where autists and semi-aspie people excel, like math and engineering. Unlike in reading, you can’t just fill in missing data in a mathematical equation. Lewis Carroll could write poems by stringing together things that sound like words, but you can’t build a circuit by wiring together a bunch of capacitors into something that looks generally like the idea of a circuit. An equation missing a digit isn’t solvable, a measurement with a misplaced decimal is useless (and potentially deadly), and a misplaced image tag in a post’s code once completely messed up my blog’s layout.
If reading and talking require being good at adding information until you get the general gist of what is meant, math and engineering require carefully not adding information. Humans aren’t very good at this, because it’s a very new skill.
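To make the contrast concrete: the same one-character slip that leaves a scrambled sentence perfectly readable throws a calculation off by an order of magnitude. A trivial sketch, with made-up numbers:

```python
# A shuffled sentence loses nothing but letter order and stays readable.
# Shift one decimal point in a dose calculation and the result is ten times off.
intended_dose_mg = 0.5   # what the prescription meant
typo_dose_mg     = 5.0   # same digits, decimal point one place over
print(typo_dose_mg / intended_dose_mg)  # 10.0 -- a tenfold overdose from a single character
```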
Modern industrial civilization is only possible because of precision engineering. You cannot fit a billion transistors on a microchip without precision. You cannot send communication and navigation satellites into space without building complicated rockets that have to not explode on the launchpad (a surprisingly difficult task) and then precisely calculating their trajectories (otherwise they will veer off disastrously). These computations are complex enough that they tax the limits of human abilities–as Drozdov et al. wrote in Fundamentals of Computer Technology in 1964:
Assume that we are to determine the trajectory of a guided space rocket. For this purpose we must calculate the points of the trajectory lying far ahead in the direction of motion of the rocket; only in that case can we estimate the deviation of the rocket from the prescribed direction and apply the necessary midcourse corrections. Such a calculation can be made only by an electronic computer, since workers would require tens of days or several months to calculate a single trajectory, while a rocket takes only three days to reach the moon. The computer will calculate the trajectory in minutes or tens of minutes.
Computers allow us to be more precise, much faster.
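To give a sense of what “calculating a trajectory” involves, here is a bare-bones sketch of the idea in Python: step a body’s position and velocity forward under Earth’s gravity, over and over. This is nothing like real mission code (the crude Euler integrator and toy starting values are mine, just to show the sheer volume of arithmetic involved):

```python
import math

# Minimal sketch: step a spacecraft's state forward under Earth's gravity alone.
# Real trajectory work adds the Moon, the Sun, thrust, and far better integrators.
MU_EARTH = 3.986e14  # Earth's gravitational parameter, m^3/s^2

def step(pos, vel, dt):
    """Advance position and velocity by one time step (simple forward Euler)."""
    r = math.sqrt(pos[0]**2 + pos[1]**2)
    acc = [-MU_EARTH * pos[0] / r**3, -MU_EARTH * pos[1] / r**3]
    new_vel = [vel[0] + acc[0] * dt, vel[1] + acc[1] * dt]
    new_pos = [pos[0] + vel[0] * dt, pos[1] + vel[1] * dt]
    return new_pos, new_vel

# Roughly a low Earth orbit: ~6,700 km from the center, ~7.7 km/s sideways.
pos, vel = [6.7e6, 0.0], [0.0, 7700.0]
for _ in range(90 * 60):          # ninety minutes of one-second steps
    pos, vel = step(pos, vel, 1.0)
print(pos)  # tens of thousands of arithmetic operations for a single coarse orbit
```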
Back in the 1700s, sailors faced a daunting problem: they had no reliable way to measure longitude while at sea. In 1707, the inability to determine their position led to four British warships crashing and sinking, causing the deaths of over a thousand sailors. The British Parliament subsequently offered a reward of 20,000 pounds (about 3 million pounds today) to anyone who could solve the problem.
Early attempts focused on old-fashioned methods of finding one’s way and telling time: the heavens. The board awarded 3,000 pounds, for example, to the widow of Tobias Mayer for his lunar tables. Just as the shadows cast by the sun or the height of the north star could be used to determine one’s latitude, so, they hoped, could the moon assist with longitude. Unfortunately, this method is clunky, difficult, and relies too much on being able to see the moon.

John Harrison came up with a radically new solution: a watch. So long as your watch shows Greenwich time, you can compare it to the local time (observable via the sun or stars), and the difference gives your longitude. Unfortunately, clocks that kept time precisely enough to accurately determine one’s longitude far from land didn’t exist in Harrison’s day: he had to build them himself. The resulting clocks are masterpieces, incredibly accurate for their day:
Harrison began working on his second ‘sea watch’ (H5) while testing was conducted on the first, which Harrison felt was being held hostage by the Board. After three years he had had enough; Harrison felt “extremely ill used by the gentlemen who I might have expected better treatment from” and decided to enlist the aid of King George III. He obtained an audience with the King, who was extremely annoyed with the Board. King George tested the watch No.2 (H5) himself at the palace and after ten weeks of daily observations between May and July in 1772, found it to be accurate to within one third of one second per day. King George then advised Harrison to petition Parliament for the full prize after threatening to appear in person to dress them down. Finally in 1773, when he was 80 years old, Harrison received a monetary award in the amount of £8,750 from Parliament for his achievements, but he never received the official award (which was never awarded to anyone). He was to survive for just three more years. …
Captain James Cook used K1, a copy of H4, on his second and third voyages, having used the lunar distance method on his first voyage.[22] K1 was made by Larcum Kendall, who had been apprenticed to John Jefferys. Cook’s log is full of praise for the watch and the charts of the southern Pacific Ocean he made with its use were remarkably accurate.
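The arithmetic behind Harrison’s method is pleasantly simple: the Earth turns 360 degrees in 24 hours, so every hour of difference between local solar time and Greenwich time corresponds to 15 degrees of longitude. A quick sketch (the voyage numbers are my own toy figures):

```python
def longitude_from_times(local_solar_hours, greenwich_hours):
    """Longitude in degrees from the gap between local solar time and Greenwich time.

    The Earth turns 360 degrees in 24 hours, i.e. 15 degrees per hour.
    Positive result = east of Greenwich, negative = west.
    """
    return (local_solar_hours - greenwich_hours) * 15.0

# Local noon (sun at its highest) while the chronometer reads 14:48 Greenwich time:
print(longitude_from_times(12.0, 14.8))  # -42.0 -- about 42 degrees west of Greenwich

# H5 drifted about a third of a second per day; over a six-week voyage, say,
# that is ~14 seconds of error, or roughly 0.06 degrees of longitude --
# a few nautical miles at the equator.
print(14 / 3600 * 15)  # ~0.058 degrees
```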
Okay, so that was a bit of an aside, but it’s a great story.
Societies with primitive technology have much less need for precision. It is difficult to imagine what a John Harrison would have dedicated his life to in a tribe of nomadic goat herders: they would have no need for time-keeping precise to the second. Primitive people didn’t even need fancy numbers like “12”; our lovely base-10 number system is a recent invention. Primitive people generally got by with tally marks; many had number systems that essentially stopped at 3: 1, 2, 3, lots.
We discussed this back in my review of Numbers and the Making of Us; the author spent much of his childhood among the Piraha of the Amazon, who have almost no words for numbers and no sense of number bigger than three (the first three are instinctual; many animals can count to three).
The author’s parents (missionaries) actually tried to teach the Piraha to count, but after many years of training they still struggled to grasp concepts like “7” and 1-to-1 correspondence (that is, if I set up a line of 7 cans, can you set up a matching line that also contains 7 cans?).
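For what it’s worth, the 1-to-1 correspondence task is easy to state mechanically, which is part of what makes the result so striking. A little sketch of the matching idea (the can lists are made up):

```python
# One-to-one correspondence without ever naming a number:
# pair items off one against one and succeed only if neither side has leftovers.
def pairs_off_exactly(line_a, line_b):
    """True if the two lines pair off exactly; nothing here needs a word like 'seven'."""
    a, b = iter(line_a), iter(line_b)
    sentinel = object()
    while True:
        x, y = next(a, sentinel), next(b, sentinel)
        if x is sentinel and y is sentinel:
            return True      # both lines ran out together: a perfect match
        if x is sentinel or y is sentinel:
            return False     # one line ran out first: no match

my_cans   = ["can"] * 7
your_cans = ["can"] * 6
print(pairs_off_exactly(my_cans, my_cans[:]))   # True
print(pairs_off_exactly(my_cans, your_cans))    # False -- one can short
```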
In short, the advance of technology over the past 200 years has required the development of much higher levels of mental precision than our ancestors used.
Most people, of course, want to be precise sometimes and generalize at other times, and our brains naturally switch between the two modes depending on what we’re doing, but there is obviously a trade-off: it is hard to be exceptionally good at both. (Smart people may have enough brains to do both well, but the rest of us have to pick one side or the other.)
In the “spiteful mutants” experiment, mouse society is inherently social, and the mutants who disrupt it are “autistic” (or what passes for autism in mice). In real life, much of our modern world was built by thing-obsessed people like John Harrison or Bill Gates. Our society isn’t “autistic” by any means, but there is a place in it for such people that wouldn’t exist in other societies. By contrast, the “spiteful mutants” in our society are oversocialized folks like this guy:
No one who was busy trying to get the tolerances on their widget-producing machines down to less than a tenth of a millimetre ever had time to worry about the “liminal space” in which anyone’s identity is made.
In short, these people are trying to make us more social, more hierarchical, more like the original mouse community, with all of the mice focused on reading social cues from each other, and less like the “mutant” community, with its focus split between social cues and things.
As for where Ted fits into all of this, well, I suppose blowing people up is pretty spiteful, but he might not have done it in the first place if people had just left his woods alone.