Note: this is just a theory, developed in reaction to recent conversations.
From Twitter user FinchesofDarwin comes an interesting tale about a polygynously-married woman in Guiana:
Manwaiko had two wives, and each of these had a family of young children. … Between the two wives and their respective children little kindness seemed to exist. One evening, while the party were squatting on the ground, eating their supper… one of the wives, who with her children had been employed in cutting firewood, discovered, on her return, that the supper for herself and her family was not to be found, having been carried off by some animal through neglect or connivance of her rival. It could hardly be expected that she would sit down quietly without the evening meal for herself and her children… and she accordingly applied to Manwaiko for a share of his allowance, which was ample. He treated her request with contempt… She then commenced a furious torrent of abuse, during which he finished his meal with great composure, until, being irritated at his indifference, she at last told him that he was no “capitan,” no father, and no man. …
Such stormy ebullitions of temper are rare in the Indian families, though, where polygamy is practiced, continual variance and ill-feeling are found.
From The Indian Tribes of Guiana, their Condition and Habits, by Reverend Brett, 1868
As we were discussing Friday, one form of female sociopathy (at least, the form relevant to this conversation) likely involves manipulating or coercing others into providing resources for her children. On Monday we discussed mental illness and its effects on fertility (generally it lowers fertility in men, but depression has little to no effect on women, neuroticism may enhance fertility, and sometimes the sisters of people with mental illnesses have slightly increased fertility, suggesting that low levels of certain traits may be beneficial).
Here is where I get 100% speculative, and to be frank, I don’t like saying negative things about women (since I am one,) but if men can be sociopaths, then women can, too–and conversely, the majority of men are not sociopaths, and neither are the majority of women.
In the quoted passage, we see two common tropes: First, the evil stepmother, in the form of the wife who let wild animals make off with half of the family’s food. Second, the crazy bitch, who goes on a tirade questioning her husband’s manliness because he has failed to provide food for her children.
In this case, only the first woman is truly sociopathic (she has harmed the other woman and her children,) but we can see how the second’s behavior could easily spill over into unreasonable demands.
Female sociopathy–manipulating men out of their money–only works as an evolutionary strategy in an environment where men themselves vary in their trustworthiness and cannot be easily predicted. If the men in a society can be counted upon to always provide for their offspring, women have no need to try to manipulate them into doing so; if men in a society flat out refuse to do so, then there is no point to trying. Only in a situation where you can affect the amount of resources you get out of a man will there be any point to doing so.
Given the environmental constraints, sociopathic female behavior is likely to increase in reaction to an increase in sociopathic male behavior–that is, when women fear abandonment or an inability to care for their children.
This manipulation has two targets–first, the father of the child, whom the woman wishes to prevent from wandering off and having children with other women, or barring that, from giving them any resources. Second, should this fail, or the male be too violent for women and children to be near, the woman targets a new male to convince him to care for her and her children, and possibly to beat the resources out of the old male.
Since children actually do need to eat, and getting enough resources can be tough, society is generally fine with women doing what they need to provide for their families (unlike men doing whatever they need to maximize reproduction, which usually ends with the police informing you that no, you cannot go “Genghis Khan” on Manhattan.)
But at times women really do go overboard, earning the title of “crazy ex.” Here’s part of one woman’s helpful list of why she went crazy:
1. He told me he loved me, then he left me. … I wasn’t going to make it easy for him to leave me. I promised myself I’d fight for my relationship because I loved him and he said he loved me. … 3. If you didn’t know, one of the quickest ways to drive a woman insane is to ignore her. … This was the most severe phase of crazy for me. I was infuriated that not only was I losing my relationship and wasn’t given a reason why, but I was being blatantly ignored by him too! … 4. He told me not to worry about his “friend,” and now he’s dating her.
Back before the invention of birth control, a woman who got dumped like this was most likely pregnant, if not already caring for several children. Abandonment was a big deal, and she had every reason not to just let her partner wander off and start impregnating new chicks.
In our modern world, he made it clear that he didn’t want to be in a relationship anymore and left.
After my ex boyfriend broke up with me I went crazy… After he dumped me for the third time I felt used and devastated. I wanted an explanation and answers. He was a jerk to me. A cruel son of a bitch. I kept begging, calling, and begging. I never got a reply back. This went on for over 3 months. …
This isn’t the only kind of “crazy” I’ve seen around, though.
An acquaintance recently recounted a story about an ex who actually ended up in the mental hospital for suicidal ideation. She listed him as her contact, something he was not exactly keen on, having already told her the relationship was over.
Then there is the phenomenon of people actually claiming to be crazy, often with rather serious disorders that you would not normally think they would want to reveal to others. For example, I have recently seen several young women claim to have Multiple Personality Disorder–a condition that is not in the DSM, so you can no longer get diagnosed with it. Though you can get diagnosed with Dissociative Identity Disorder, this disorder is rare and quite controversial, and I would expect anyone with a real diagnosis to use the real name, just as few schizophrenics claim to have been diagnosed with dementia praecox.
MPD is less of a real disorder and more of a fad spread by movies, TV, and unscrupulous shrinks, though many people who claim to have it are quite genuinely suffering.
(I should emphasize that in most of these cases, the person in question is genuinely suffering.)
Most of these cases–MPD, PTSD, etc–are supposedly triggered by traumatic experiences, such as childhood abuse or spousal abuse. (Oddly, being starved half to death in a POW camp doesn’t seem to trigger MPD.) And yet, despite the severity of these conditions, people I encounter seem to respond positively to these claims of mental illness–if anything, a claim of mental illness seems to get people more support.
So I suggest a potential mechanism:
First, everyone of course has a pre-set range of responses/behaviors they can reasonably call up, but these ranges vary from person to person. For example, I will run faster if my kids are in danger than if I’m late for an appointment, but you may be faster than me even when you’re just jogging.
Second, an unstable, violent, or neglectful environment triggers neuroticism, which in turn triggers mental instability.
Third, mental instability attracts helpers, who try to “rescue” the woman from bad circumstances.
Fourth, sometimes this goes completely overboard into trying to destroy an ex, convincing a new partner to harm the ex, spreading untrue rumors about the ex, etc. Alternatively, it goes overboard in the woman becoming unable to cope with life and needing psychiatric treatment/medication.
Since unstable environments trigger mental instability in the first place, sociopathic men are probably most likely to encounter sociopathic women, which makes the descriptions of female sociopathy automatically sound very questionable:
“My crazy ex told all of our friends I gave her gonorrhea!”
“Yeah, but that was after you stole $5,000 from her and broke two of her ribs.”
This makes it difficult to collect objective information on the matter, and is why this post is very, very speculative.
Note: this is just a theory, developed in reaction to recent conversations.
Let us assume, first of all, that men and women have different optimal reproductive strategies, based on their different anatomy. In case you have not experienced birth yourself, it’s a difference of calories, time, and potential death.
In the ancestral environment (before child support laws, abortion, birth control, or infant formula):
For men, the absolute minimal paternal investment in a child–immediate abandonment–involves a few minutes of effort and a spoonful of semen. There are few dangers involved, except for the possibility of other males competing for the same female. A hypothetical man could, with very little strain or extra physical effort, father thousands of children–gay men regularly go through the physical motions of doing just that, and hardly seem exhausted by the effort.
For women, the absolute minimal parental investment is nine months of gestation followed by childbirth. This is calorically expensive, interferes with the mother’s ability to take care of her other children, and could kill her. A woman who tried to maximize her pregnancies from menarche to menopause might produce 25 children.
If a man abandons his children, there is a decent chance they will still survive, because they can be nursed by their mother; if a woman abandons her child, it is likely to die, because its father cannot lactate and so cannot feed it.
In sum, for men, random procreative acts (ie, sex) are extremely low-cost and still have the potential to produce offspring. For women, random procreative acts are extremely costly. So men have an incentive to spread their sperm around and women have an incentive to be picky about when and with whom they reproduce.
This is well known to, well, everyone.
Now, obviously most men do not abandon their children (nor do most women.) It isn’t in their interest to do so. A man’s children are more likely to survive and do well in life if he invests in them. (In a few societies where paternity is really uncertain, men invest resources in their sisters’ children, who are at least related to them, rather than opting out altogether.) As far as I know, some amount of male input into their children or their sisters’ children is a human universal–the only variation is in how much.
Men want to invest in their children because this helps their children succeed, but a few un-tended bastards here and there are not a major problem. Some of them might even survive.
By contrast, women really don’t want to get saddled with bastards.
We may define sociopathy, informally, as attempting to achieve evolutionary ends by means that harm others in society, eg, stealing. In this case, rape and child abandonment are sociopathic ways of increasing men’s reproductive success at the expense of other people. (Note that sociopathy doesn’t have a formal definition and I am using it here as a tool, not a real diagnosis. If someone has a better term, I’m happy to use it.)
This is, again, quite obvious–everyone knows that men are much more likely than women to be imprisoned for violent acts, rape included. Men are also more likely than women to try to skip out on their child support payments.
Note that this “sociopathy” is not necessarily a mental illness (a true illness ought to make a dent in one’s evolutionary success). Genghis Khan raped a lot of women, and it turned out great for his genes. It is simply a reproductive strategy that harms other people.
So what does female sociopathy look like?
It can’t look like male sociopathy, because child abandonment decreases a woman’s fertility. For a woman, violence and abandonment would be signs of true mental defects. Rather, we want to look at ways women improve their chances of reproductive success at the expense of others.
In other words, female sociopathy involves manipulating or coercing others into providing resources for her children.
But it’s getting late; let’s continue with part 2 on Monday. (Wednesday is book club.)
Buzzwords like “the male gaze,” “objectification,” “stereotype threat,” “structural oppression,” “white privilege,” etc. are all really just re-hashings of the Evil Eye. We’ve shed the formal structure of religion but not the impulse for mystical thinking.
Today while debating with a friend about whether men or women have it better, it became plain that we were approaching the question from very different perspectives. He looked at men’s higher incomes and over-representation among CEOs and government officials and saw what I’ll call the mystical explanation: male oppression of women. I looked at the same data plus male over-representation among the homeless, mentally ill, suicides, and murder victims, and advocated the scientific explanation: greater male variability.
What do I mean by mystical?
In primitive tribes, an accusation of witchcraft can quickly get you killed. What might inspire an accusation of witchcraft? A sick cow, a sudden death, a snake in a spot where it wasn’t yesterday, a drought, a flood, a twisted ankle–pretty much anything unexpected or unfortunate.
People understand cause and effect. Things happen because other things make them happen. But without a good scientific understanding of the world, the true causes of many events are unfindable, so people turn to mystical explanations. Why does it rain? Because a goddess is weeping. Why do droughts happen? Because someone forgot to make a sacrifice and angered the gods. Why do people get sick and die? Because other people cursed them.
A curse need not be deliberate. Simply being mad at someone or bearing them ill-will might be enough to trigger the Evil Eye and curse them; the accused may then be forced by angry villagers to undo the curse–however the witchdoctor determines the curse must be undone. (This can be quite expensive.)
In animist thinking, things do not just happen. Things happen for reasons–usually malicious reasons.
The death of a companion via snakebite (probably a common occurrence among people who walk barefoot in Australia) triggered a brutal “revenge” killing once it was determined who had cast the curse that motivated the snake:
“The cause of this sudden unprovoked cruelty was not, as usual, about the women, but because the man who had been killed by the bite of the snake belonged to the hostile tribe, and they believed my supposed brother-in-law carried about with him something that had occasioned his death. They have all sorts of fancies of this kind, and it is frequently the case, that they take a man’s kidneys out after death, tie them up in something, and carry them round the neck, as a sort of protection and valuable charm, for either good or evil.”
Buckley’s adoptive Aboriginal family, his sister and brother-in-law, who had been helping him since the tribe saved his life years earlier, were killed in this incident.
“I should have been most brutally unfeeling, had I not suffered the deepest mental anguish from the loss of these poor people, who had all along been so kind and good to me. I am not ashamed to say, that for several hours my tears flowed in torrents, and, that for a long time I wept unceasingly. To them, as I have said before, I was as a living dead brother, whose presence and safety was their sole anxiety. Nothing could exceed the kindness these poor natives had shown me, and now they were dead, murdered by the band of savages I saw around me, apparently thirsting for more blood. Of all my sufferings in the wilderness, there was nothing equal to the agony I now endured.” …
“I returned to the scene of the brutal massacre; and finding the ashes and bones of my late friends, I scraped them up together, and covered them over with turf, burying them in the best manner I could, that being the only return I could make for their many kindnesses. I did so in great grief at the recollection of what they had done for me through so many years, and in all my dangers and troubles. ”
An account of Florence Young’s missionary work in the Solomon Islands (near Australia) recounts an identical justification for the islands’ cycle of violence (which was quite threatening to Florence herself). Every time someone died of any natural cause, their family went to the local witch doctor, who then used magic to determine who had used evil magic to kill the dead guy, and then the family would go and kill whomever the witch doctor indicated.
The advent of Christianity therefore caused a power struggle between the missionaries and the witch doctors, who were accustomed to being able to extort everyone and trick their followers into killing anyone who pissed them off. (See also Isaac Bacirongo’s account of the witch doctor who extorted his pre-pubescent sister as payment for a spell intended to kill Isaac’s wife–note: Isaac was not the one buying this spell; he likes his wife.)
So why do women make less money than men? Why are they underrepresented among CEOs and Governors and mathematicians? Something about the patriarchy and stereotype threat; something about men being evil.
Frankly, it sounds like men have the Evil Eye. A man thinks “Women are worse at math” and women suddenly become worse at math.
To be fair, my friend had only half the data, and when you have only half the data, the situation for men looks a lot better than the situation for women. But men aren’t only over-represented at the high ends of achievement–they’re also over-represented at the bottom. If patriarchy and stereotypes keep women from getting PhDs in math, why are little boys over-represented in special ed classes? Why are they more likely to be homeless or schizophrenic, to commit suicide, or to be murdered? Neither patriarchy nor male privilege can explain such phenomena.
Biology supplies us with a totally different explanation: greater male variability.
To review genetics, you have 23 pairs of chromosomes. Most of them are roughly X-shaped, except for the famous Y chromosome.
You have two of each chromosome because you received one from each of your parents. Much of what the chromosomes do is redundant–for example, if you have blue eyes, then you received a gene for blue eyes from one parent and another from your other parent. One blue-eye gene would be enough to give you blue eyes, but you have two.
Eye color isn’t terribly important, but things like how your immune system responds to threats or how your blood clots are. A rare mutation might make you significantly better or worse at these things, but the fact that you have two (or more) genes controlling each trait means that each very rare mutation tends to be paired with a more common version–lessening its effect.
There is, however, one big exception: the XY pair. Men don’t have a pair of Xs or a pair of Ys; they have one of each. If something is wrong on the X, the Y may have nothing to fix it, and vice versa.
The upshot is that if a man happens to get a gene that makes him extra tall, smart, conscientious, creative, charismatic, etc. somewhere on his X or Y chromosomes, he may not have a corresponding gene on the other chromosome to moderate its effects–and if he has a gene that makes him extra short, dumb, impulsive, dull, or anti-social, he is still unlikely to have a corresponding gene to dull the effect.
Height is an uncontroversial example. Yes, the average man is taller than the average woman, but the spread of male heights is wider than the spread of female heights. More women are clustered around the average female height, while more men are both taller than the average man and shorter than the average man.
The graph to the right shows test scores from the Armed Services Vocational Aptitude Battery, but it shows the same basic idea: different means with women clustered more closely around average than men.
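The tail effect described above is purely mechanical: a distribution with a wider spread puts more of its members at both extremes, even when the means are identical. Here is a toy simulation of that idea–the means, standard deviations, and cutoffs are invented for illustration, not taken from any real test data:

```python
import random

random.seed(42)
N = 100_000

# Hypothetical parameters: identical means, but the first group's
# scores are more spread out than the second's.
wide = [random.gauss(100, 17) for _ in range(N)]
narrow = [random.gauss(100, 14) for _ in range(N)]

def tail_fraction(scores, lo=70, hi=130):
    """Fraction of scores in either extreme tail (below lo or above hi)."""
    return sum(s < lo or s > hi for s in scores) / len(scores)

# The wider distribution lands more people at BOTH extremes,
# even though the averages are the same.
print(tail_fraction(wide) > tail_fraction(narrow))  # True
```

No malice is needed anywhere in this sketch; over-representation at the top and bottom falls straight out of the larger standard deviation.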
Whether the greater male variability hypothesis is true or not, it is an explanation that assumes no malice on anyone’s part. No one is maliciously forcing little boys into special ed, nor grown men into homelessness and suicide. The architecture of the XY and XX chromosome pairs is simply part of how humans are constructed.
But notice that you are much more likely to hear the theory that uses mysticism to blame people than the theory that doesn’t. One is tempted to think that some people are just inclined to assume that others are malicious–while ignoring other, more mundane explanations.
The other day I was walking through the garden when I looked down, saw one of these, leapt back, and screamed loudly enough to notify the entire neighborhood:
(The one in my yard was insect free, however.)
After catching my breath, I wondered, “Is that a wasp nest or a beehive?” and crept back for a closer look. Wasp nest. I mentally paged through my knowledge of wasp nests: wasps abandon nests when they fall on the ground. This one was probably empty and safe to step past. I later tossed it onto the compost pile.
The interesting part of this incident wasn’t the nest, but my reaction. I jumped away from the thing before I had even consciously figured out what the nest was. Only once I was safe did I consciously think about the nest.
Gazzaniga discusses a problem faced by brains trying to evolve to be bigger and smarter: how do you get more neurons working without taking up an absurd amount of space connecting each and every neuron to every other neuron?
Imagine a brain with 5 connected neurons: each neuron requires 4 connections to talk to every other neuron. A 5 neuron brain would thus need space for 10 total connections.
The addition of a 6th neuron would require 5 new connections; a 7th neuron requires 6 new connections, etc. A fully connected brain of 100 neurons would require 99 connections per neuron, for a total of 4,950 connections.
Connecting all of your neurons might work fine if you’re a sea squirt, with only 230 or so neurons, but it is going to fail hard if you’re trying to hook up 86 billion. The space required to hook up all of those neurons would be massively larger than the brain you can actually maintain by eating.
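The arithmetic above is just the handshake formula: a fully connected network of n neurons needs n(n−1)/2 links, which grows quadratically. A quick sketch of the numbers (illustrative, matching the examples in the text):

```python
def full_connections(n):
    """Links needed to wire every neuron directly to every other:
    each of n neurons connects to n - 1 others, and each link is
    shared by two neurons, so n * (n - 1) / 2 total."""
    return n * (n - 1) // 2

print(full_connections(5))    # 10, the 5-neuron example
print(full_connections(100))  # 4950, the 100-neuron example
print(full_connections(230))  # a sea squirt's neurons: ~26,000 links
print(full_connections(86_000_000_000))  # a human brain: ~3.7e21 links
```

Doubling the neuron count roughly quadruples the required wiring, which is why full connectivity stops being an option long before brains reach human scale.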
So how does an organism evolving to be smarter deal with the connectivity demands of increasing brain size?
Human social lives suggest an answer. At the human scale, one person can, Dunbar estimates, maintain functional social relationships with about 150 other people, including an understanding of those people’s relationships with each other. 150 people (the “Dunbar number”) is therefore the number of people who can reliably cooperate or form groups without requiring any top-down organization.
So how do humans survive in groups of a thousand, a million, or a billion (eg, China)? How do we build large-scale infrastructure projects requiring the work of thousands of people and used by millions, like interstate highways? By organization–that is, specialization.
In a small tribe of 150 people, almost everyone in the tribe can do most of the jobs necessary for the tribe’s survival, within the obvious limits of biology. Men and women are both primarily occupied with collecting food. Both prepare clothing and shelter; both can cook. There is some specialization of labor–obviously men can carry heavier loads; women can nurse children–but most people are generally competent at most jobs.
In a modern industrial economy, most people are completely incompetent at most jobs. I have a nice garden, but I don’t even know how to turn on a tractor, much less how to care for a cow. The average person does not know how to knit or sew, much less build a house, wire up the electricity and lay the plumbing. We attend school from 5 to 18 or 22 or 30 and end up less competent at surviving in our own societies than a cave man with no school was in his, not because school is terrible but because modern industrial society requires so much specialized knowledge to keep everything running that no one person can truly master even a tenth of it.
Specialization, not just of people but of organizations and institutions, like hospitals devoted to treating the sick, Walmarts devoted to selling goods, and Microsoft devoted to writing and selling computer software and hardware, lets society function without requiring that everyone learn to be a doctor, merchant, and computer expert.
Similarly, brains expand their competence via specialization, not denser neural connections.
The smartest people may boast more neurons than those of average intelligence, but their brains have fewer neural connections…
Neuroscientists in Germany recruited 259 participants, both men and women, to take IQ tests and have their brains imaged…
The research revealed a strong correlation between the number of dendrites in a person’s cerebral cortex and their intelligence. The smartest participants had fewer neural connections in their cerebral cortex.
Fewer neural connections overall allows different parts of the brain to specialize, increasing local competence.
All things are produced more plentifully and easily and of a better quality when one man does one thing that is natural to him and does it at the right time, and leaves other things. –Plato, The Republic
The brains of mice, as Gazzaniga discusses, do not need to be highly specialized, because mice are not very smart and do not do many specialized activities. Human brains, by contrast, are highly specialized, as anyone who has ever had a stroke has discovered. (Henry Harpending of West Hunter, for example, once had a stroke while visiting Germany that knocked out the area of his brain responsible for reading, but since he couldn’t read German in the first place, he didn’t realize anything was wrong until several hours later.)
I read, about a decade ago, that male and female brains have different levels, and patterns, of internal connectivity. (Here and here are articles on the subject.) These differences in connectivity may allow men and women to excel at different skills, and since we humans are a social species that can communicate by talking, this allows us to take cognitive modularity beyond the level of a single brain.
So modularity lets us learn (and do) more things, with the downside that sometimes knowledge is highly localized–that is, we have a lot of knowledge that we seem able to access only under specific circumstances, rather than use generally.
For example, I have long wondered at the phenomenon of people who can definitely do complicated math when asked to, but show no practical number sense in everyday life–like the folks from the Yale Philosophy department who are confused about why African Americans are under-represented in their major, even though Yale has an African American Studies department which attracts a disproportionate share of Yale’s African American students. The mathematical certainty that if one major in the school attracts more African American students, then other majors must end up with fewer, has been lost on these otherwise bright minds.
Yalies are not the only folks who struggle to use the things they know. When asked to name a book–any book–ordinary people failed. Surely these people have heard of a book at some point in their lives–the Bible is pretty famous, as is Harry Potter. Even if you don’t like books, they were assigned in school, and your parents probably read The Cat in the Hat and Green Eggs and Ham to you when you were a kid. It is not that they do not have the knowledge; it is that they cannot access it.
Teachers complain all the time that students–even very good ones–can memorize all of the information they need for a test, regurgitate it all perfectly, and then turn around and show no practical understanding of the information at all.
Richard Feynman wrote eloquently of his time teaching future science teachers in Brazil:
In regard to education in Brazil, I had a very interesting experience. I was teaching a group of students who would ultimately become teachers, since at that time there were not many opportunities in Brazil for a highly trained person in science. These students had already had many courses, and this was to be their most advanced course in electricity and magnetism – Maxwell’s equations, and so on. …
I discovered a very strange phenomenon: I could ask a question, which the students would answer immediately. But the next time I would ask the question – the same subject, and the same question, as far as I could tell – they couldn’t answer it at all! For instance, one time I was talking about polarized light, and I gave them all some strips of polaroid.
Polaroid passes only light whose electric vector is in a certain direction, so I explained how you could tell which way the light is polarized from whether the polaroid is dark or light.
We first took two strips of polaroid and rotated them until they let the most light through. From doing that we could tell that the two strips were now admitting light polarized in the same direction – what passed through one piece of polaroid could also pass through the other. But then I asked them how one could tell the absolute direction of polarization, for a single piece of polaroid.
They hadn’t any idea.
I knew this took a certain amount of ingenuity, so I gave them a hint: “Look at the light reflected from the bay outside.”
Nobody said anything.
Then I said, “Have you ever heard of Brewster’s Angle?”
“Yes, sir! Brewster’s Angle is the angle at which light reflected from a medium with an index of refraction is completely polarized.”
“And which way is the light polarized when it’s reflected?”
“The light is polarized perpendicular to the plane of reflection, sir.” Even now, I have to think about it; they knew it cold! They even knew the tangent of the angle equals the index!
I said, “Well?”
Still nothing. They had just told me that light reflected from a medium with an index, such as the bay outside, was polarized; they had even told me which way it was polarized.
I said, “Look at the bay outside, through the polaroid. Now turn the polaroid.”
“Ooh, it’s polarized!” they said.
After a lot of investigation, I finally figured out that the students had memorized everything, but they didn’t know what anything meant. When they heard “light that is reflected from a medium with an index,” they didn’t know that it meant a material such as water. They didn’t know that the “direction of the light” is the direction in which you see something when you’re looking at it, and so on. Everything was entirely memorized, yet nothing had been translated into meaningful words. So if I asked, “What is Brewster’s Angle?” I’m going into the computer with the right keywords. But if I say, “Look at the water,” nothing happens – they don’t have anything under “Look at the water”!
The students here are not dumb, and memorizing things is not bad–memorizing your times tables is very useful–but they have everything lodged in their “memorization module” and nothing in their “practical experience module.” (Note: I am not necessarily suggesting that there exists a literal, physical spot in the brain where memorized and experienced knowledge reside, but that certain brain structures and networks lodge information in ways that make it easier or harder to access.)
People frequently make arguments that don’t make logical sense when you think them all the way through from start to finish, but do make sense if we assume that people are using specific brain modules for quick reasoning and don’t necessarily cross-check their results with each other. For example, when we are angry because someone has done something bad to us, we tend to snap at people who had nothing to do with it. Our brains are in “fight and punish mode” and latch on to the nearest person as the person who most likely committed the offense, even if we consciously know they weren’t involved.
Political discussions are often marred by folks running what ought to be logical arguments through status signaling, emotional, or tribal modules. The desire to see Bad People punished (a reasonable desire if we all lived in the same physical community with each other) interferes with a discussion of whether said punishment is actually useful, effective, or just. For example, a man who has been incorrectly convicted of the rape of a child will have a difficult time getting anyone to listen sympathetically to his case.
In the case of white South African victims of racially-motivated murder, the notion that their ancestors did wrong and therefore they deserve to be punished often overrides sympathy. As BBC notes, these killings tend to be particularly brutal (they often involve torture) and targeted, but the South African government doesn’t care:
According to one leading political activist, Mandla Nyaqela, this is the after-effect of the huge degree of selfishness and brutality which was shown towards the black population under apartheid. …
Virtually every week the press here report the murders of white farmers, though you will not hear much about it in the media outside South Africa. In South Africa you are twice as likely to be murdered if you are a white farmer than if you are a police officer – and the police here have a particularly dangerous life. The killings of farmers are often particularly brutal. …
Ernst Roets’s organisation has published the names of more than 2,000 people who have died over the last two decades. The government has so far been unwilling to make solving and preventing these murders a priority. …
There used to be 60,000 white farmers in South Africa. In 20 years that number has halved.
The Christian Science Monitor reports on the measures ordinary South Africans have to take, in what was once a safe country, to avoid becoming human shish kebabs. The article is worth pausing to read, though it is a bit of a tangent from our present discussion. It ends with a mind-bending statement about a borrowed dog (dogs are also important for security):
My friends tell me the dog is fine around children, but is skittish around men, especially black men. The people at the dog pound told them it had probably been abused. As we walk past house after house, with barking dog after barking dog, I notice Lampo pays no attention. Instead, he’s watching the stream of housekeepers and gardeners heading home from work. They eye the dog nervously back.
Great, I think, I’m walking a racist dog.
Module one: Boy South Africa has a lot of crime. Better get a dog, cover my house with steel bars, and an extensive security system.
Module two: Associating black people with crime is racist, therefore my dog is racist for being wary of people who look like the person who abused it.
And while some people are obviously sympathetic to the plight of murdered people, “Cry me a river White South African Colonizers” is a very common reaction. (Never mind that the people committing crimes in South Africa today never lived under apartheid; they’ve lived in a black-run country for their entire lives.) Logically, white South Africans did not do anything to deserve being killed, and, as with the golden goose, killing the people who produce the food will just trigger a repeat of Zimbabwe, but the modules of tribalism–“I do not care about these people because they are not mine and I want their stuff”–and punishment–“I read about a horrible thing someone did, so I want to punish everyone who looks like them”–trump logic.
Who dies–and how they die–significantly shapes our engagement with the news. Gun deaths via mass shootings get much more coverage and worry than ordinary homicides, even though ordinary homicides are far more common. Homicides get more coverage and worry than suicides, even though suicides are far more common. The majority of gun deaths are actually suicides, but you’d never know that from listening to our national conversation about guns, simply because we are biased to worry far more about other people killing us than about killing ourselves.
Similarly, the death of one person via volcano receives about the same news coverage as 650 in a flood, 2,000 in a drought, or 40,000 in a famine. As the article notes:
Instead of considering the objective damage caused by natural disasters, networks tend to look for disasters that are “rife with drama”, as one New York Times article put it—hurricanes, tornadoes, forest fires, earthquakes all make for splashy headlines and captivating visuals. Thanks to this selectivity, less “spectacular” but oftentimes more deadly natural disasters tend to get passed over. Food shortages, for example, result in the most casualties and affect the most people per incident, but their onset is more gradual than that of a volcanic explosion or sudden earthquake. … This bias for the spectacular is not only unfair and misleading, but also has the potential to misallocate attention and aid.
There are similar biases by continent, with disasters in Africa receiving less attention than disasters in Europe (this correlates with African disasters being more likely to be the slow-motion famines, epidemics and droughts that kill lots of people, and European disasters being splashier, though perhaps we’d consider famines “splashier” if they happened in Paris instead of Ethiopia.)
From a neuropolitical perspective, I suspect that patterns such as the Big Five personality traits correlating with particular political positions (“openness” with “liberalism,” for example, or “conscientiousness” with “conservativeness,”) is caused by patterns of brain activity that cause some people to depend more or less on particular brain modules for processing.
For example, conservatives process more of the world through the areas of their brain that are also used for processing disgust (not one of “the five,” but still an important psychological trait), which increases their fear of pathogens, disease vectors, and generally anything new or from the outside. Disgust can go so far as to process other people’s faces or body language as “disgusting” (eg, trans people) even when there is objectively no actual contamination or pathogenic risk involved.
Similarly, people who feel more guilt in one area of their life often feel guilt in others–eg, “White guilt was significantly associated with bulimia nervosa symptomatology.” The arrow of causation is unclear–guilt about eating might spill over into guilt about existing, or guilt about existing might cause guilt about eating, or people who generally feel guilty about everything could have both. Either way, these people are generally not logically reasoning, “Whites have done bad things, therefore I should starve myself.” (Should veganism be classified as a politically motivated eating disorder?)
I could continue forever–
Restrictions on medical research are biased toward preventing mentally salient incidents like thalidomide babies, but against the invisible cost of children who die from diseases that could have been cured had research not been prevented by regulations.
America has a large Somali community but not a Congolese one (85,000 Somalis vs. 13,000 Congolese, of whom 10,000 hail from the DRC; Somalia has about 14 million people and the DRC about 78.7 million, so it’s not due to there being more Somalis in the world), for no particular reason I’ve been able to discover, other than that President Clinton once disastrously sent a few helicopters to intervene in the eternal Somali civil war and so the government decided that we now have a special obligation to take in Somalis.
–but that’s probably enough.
I have tried here to present a balanced account of different political biases, but I would like to end by noting that modular thinking, while it can lead to stupid decisions, exists for good reasons. If purely logical thinking were superior to modular, we’d probably be better at it. Still, cognitive biases exist and lead to a lot of stupid or sub-optimal results.
As a parent, I spend much of my day attempting to “socialize” my kids–“Don’t hit your brother! Stop jumping on the couch! For the umpteenth time, ‘yeah, right!’ is sarcasm.”
There are a lot of things that don’t come naturally to little kids. Many of them struggle to understand that these wiggly lines on paper can turn into words or that tiny, invisible things on their hands can make them sick.
“Yes, you have to brush your teeth and go to bed, no, I’m not explaining why again.”
And they definitely don’t understand why I won’t let them have ice cream for dinner.
“Don’t ride your bike down the hill and into the street like that! You could get hit by a car and DIE!”
Despite all of the effort I have devoted to transforming this wiggly bunch of feral children into respectable adults (someday, I hope,) I have never found myself concerned with the task of teaching them about gender. As a practical matter, whether the children behave like “girls” or “boys” makes little difference to the running of the household, because we have both–by contrast, whether the children put their dishes away after meals and do their homework without me having to threaten or cajole them makes a big difference.
Honestly, I can’t convince them not to pick their noses in public or that broccoli is tasty, but I’m supposed to somehow subtly convince them that they’ve got to play Minecraft because they’re boys (even while explicitly saying, “Hey, you’ve been playing that for two hours, go ride your bike,”) or that they’re supposed to be walking doormats because they’re girls (even while saying, “Next time he pushes you, push him back!”)
And yet the boys still act like boys, the girls like girls–statistically speaking.
“Ah,” I hear some of you saying, “But you are just one parent! How do you know there aren’t legions of other parents who are out there doing everything they can to ensure that their sons succeed and daughters fail in life?”
This is, if you will excuse me, a very strange objection. What parent desires failure from their children?
One of my kids enjoys watching YouTube cooking videos, and they’re nearly 100% women making cakes.
Women’s magazines focus exclusively on 4 topics: men, fashion, diets, and cupcakes. You might think that diets and cupcakes are incompatible, but women’s magazines believe otherwise:
Just in case it’s not clear, that is not a watermelon. It is cake, cleverly disguised as a watermelon.
(YouTube has videos that show you how to make much better cake watermelons–for starters, you want red velvet cake for the middle, not just frosting…)
Magazines specifically aimed at “people who want to make cakes” are also overwhelmingly feminine. Whether we’re talking wedding cakes or chocolate cravings, apple pastries or donuts, sweets and women just seem to go together.
If men’s magazines ever feature food, I bet they’re steak and BBQ. (*Image searches*)
The meat-related articles do appear to be a little more gender-neutral than the cupcake-related articles–probably because men don’t tend to decorate their steaks with tiny baseball bats cut out of steak the way women like to decorate their cakes with tiny flowers made out of frosting.
It’s almost as if women have some kind of overwhelming craving for fats and sugars that men don’t really share.
I was talking with a friend recently about their workplace, where, “All of the women are on diets, but none of them can stay on their diets because they are all constantly eating at their workstations.” Further inquiries revealed that yes, they are eating sweets and pastries, not cashews and carrots, and that there is some kind of “office culture” of all of the women eating pastries together.
The irony here is pretty obvious.
Even many (most?) specialty “diet” foods are designed to still taste sweet. “Fat-free” yogurt is marketed as a health food even though it has as much sugar in it as a bowl of ice cream. Women are so attracted to the taste of sweet sodas, they drink disgusting Diet Coke. Dieting websites advise us that cake topped with fruit is “healthy.”
When men diet, they think “eat nothing but protein until ketosis kicks in” sounds like a great idea. When women diet, they want fat-free ice cream.
I don’t think it is just “women lack willpower.” (Or at least, not willpower in the sense of something people have much control over.) Rather, I think that men and women actually have substantially different food cravings.
So do children, for that matter.
Throughout most of human history, from hunter-gatherers to agriculturalists, the vast majority of women have specialized in obtaining (gathering, tending, harvesting,) plants. (The only exceptions are societies where people don’t eat plants, like the Inuit and the Masai, and our modern society, where most of us aren’t involved in food production.) By contrast, men have specialized in hunting, raising, and butchering animals–not because they were trying to hog the protein or had some sexist ideas about food production, but because animals tend to be bigger and heavier than women can easily lift. Dragging home and butchering large game requires significant strength.
I am inventing a “Just So” story, of course. But it seems sensible enough that each gender evolved a tendency to crave the particular kinds of foods it was most adept at obtaining.
Exercise wears down muscles; protein is necessary to build them back up, so active lifestyles require protein. Our male ancestors’ most important activities were most likely heavy labor (eg, building huts, hauling firewood, butchering game,) and defending the tribe. Our female ancestors’ most important activities were giving birth and nursing children (we would not exist had they not, after all.) For these activities, women want to be fat. It’s not good enough to put on weight after you get pregnant, when the growing fetus is already dependent on its mother for nutrients. Far better for a woman to be plump before she gets pregnant (and to stay that way long after.)
Of course, this is “fat” by historical standards, not modern American standards.
I suspect, therefore, that women are naturally inclined to eat as much as possible of sweet foods in order to put on weight in preparation for pregnancy and lactation–only today, the average woman has 2 pregnancies instead of 12, and so instead of turning that extra weight into children and milk, it just builds up.
Obviously we are talking about a relatively small effect on food preferences, both because our ancestors could not afford to be too picky about what they ate, and because the genetic difference between men and women is slight–not like the difference between humans and lizards, say.
Interestingly, gender expression in humans appears to basically be female by default. If, by random chance, you are born with only one X chromosome, (instead of the normal XX or XY,) you can still survive. Sure, you’ll be short, you probably won’t menstruate, and you’ll likely have a variety of other issues, but you’ll be alive. By contrast, if you received only a Y chromosome from your parents and no accompanying X, you wouldn’t be here reading this post. You can’t survive with just a Y. Too many necessary proteins are encoded on the X.
Gender differences show up even in fetuses, but don’t become a huge deal until puberty, when the production of androgens and estrogens really cranks up.
Take muscle development: muscle development relies on the production of androgens (eg, testosterone.) Grownups produce more androgens than small children, and men produce more than women. Children can exercise and certainly children who do daily farm chores are stronger than children who sit on their butts watching TV all day, but children can’t do intense strength-training because they just don’t produce enough androgens to build big muscles. Women, likewise, produce fewer androgens, and so cannot build muscles at the same rate as men, though obviously they are stronger than children.
At puberty, boys begin producing the androgens that allow them to build muscles and become significantly stronger than girls.
Sans androgens, even XY people develop as female. (See Androgen Insensitivity Syndrome, in which people with XY chromosomes cannot absorb the androgens their bodies create, and so develop as female.) Children produce some androgens (obviously,) but not nearly as many as adults. Pre-pubescent boys, therefore, are more “feminine,” biologically, than post-pubescent men; puberty induces maleness.
All children seem pretty much obsessed with sweets, far more than adults. If allowed, they will happily eat cake until they vomit.
Even though food seems like a realm where evolution would heavily influence our tastes, it’s pretty obvious that culture has a huge effect. I doubt Jews have a natural aversion to pork or Hindus to beef. Whether you think chicken hearts are tasty or vomitous is almost entirely dependent on whether or not they are a common food in your culture.
But small children are blissfully less attuned to culture than grownups. Like little id machines, they spit out strained peas and throw them on the floor. They do not care about our notion that “vegetables are good for you.” This from someone who’ll eat bird poop if you let them.
The child’s affection for sweets, therefore, I suspect is completely natural and instinctual. Before the invention of refined sugars and modern food distribution systems, it probably kept them alive and healthy. Remember that the whole reason grownups try to eat more vegetables is that vegetables are low in calories. Grownups have larger stomachs and so can eat more than children, allowing them to extract adequate calories from low-calorie foods, but small children do not and cannot. In developing countries, children still have trouble getting enough calories despite abundant food in areas where that food is low-calorie plants, which they just cannot physically eat enough of. Children, therefore, are obsessed with high-calorie foods.
At puberty, this instinct changes for boys–orienting them more toward protein sources, which they are going to have to expend a lot of energy trying to haul back to their families for the rest of their lives, but stays basically unchanged in females.
ETA: I have found two more sources/items of relevance:
When it comes to what we eat, men and women behave differently: Men consume more beef, eggs, and poultry; while women eat more fruits and vegetables and consume less fat than do men. … The gender differences in preferences for healthier foods begin in childhood. Previous literature has found that girls choose healthier food and are fonder of fruits and vegetables than are boys. Boys rated beef, processed meat, and eggs as more desirable than did girls. …
Sensory (taste) differences between the genders are the second most widely ventured explanation for the differences in food choices, although it is not clear that such genetic differences actually exist. While the popular media argue that females prefer sweetness and dislike bitterness, while males may enjoy bitterness, academic literature on this matter is less conclusive. The bitter taste receptor, gene TAS2R38, has been associated with the ability to taste PROP (6-n-propylthiouracil),
one source of genetic variation in PROP and PTC taste. Individuals who experience bitterness strongly are assumed to also experience sweetness strongly relative to those who experience PROP as only slightly bitter. While previous studies found that inherited taste-blindness to bitter compounds such as PROP may be a risk factor for obesity, this literature has been hotly disputed.
The distribution of perceived bitterness of PROP differs among women and men, as does the correlation between genetic taste measures and acceptance of sweetness. A higher percentage of women are PROP and PTC tasters, sensing bitterness above threshold. It has been suggested that women are more likely to be supertasters, or those who taste with far greater intensity than average.
(I have removed the in-line citations for ease of reading; please refer to the original if you want them.)
Well, I don’t remember where this graph came from, but it looks like my intuitions were pretty good: males and females both have very low levels of testosterone during childhood, and during puberty their levels become radically different.
Today’s selection, Homicide, is ev psych with a side of anthropology; I am excerpting the chapter on people-who-murder-children. (You are officially forewarned.)
Way back in middle school, I happened across (I forget how) my first university-level textbook, on historical European families and family law. I got through the chapter on infanticide before giving up, horrified that enough Germans were smushing their infants under mattresses or tossing them into the family hearth that the Holy Roman Empire needed laws specifically on the subject.
It was a disillusioning moment.
Daly and Wilson’s Homicide, 1988, contributes some (slightly) more recent data to the subject (though of course it would be nice to have even more recent data).
(I think some of the oddities in # of incidents per year may be due to ages being estimated when the child’s true age isn’t known, eg, “headless torso of a boy about 6 years old found floating in the Thames.”)
We begin with a conversation on the subject of which child parents would favor in an emergency:
If parental motives are such as to promote the parent’s own fitness, then we should expect that parents will often be inclined to act so that neither sibling’s interests prevail completely. Typically, parental imposition of equity will involve supporting the younger, weaker competitor, even when the parent would favor the older if forced to choose between the two. It is this latter sort of situation–“Which do you save when one must be sacrificed?”–in which parents’ differential valuation of their children really comes to the fore. Recall that there were 11 societies in the ethnographic review of Chapter 3 for which it was reported that a newborn might be killed if the birth interval were too short or the brood too numerous. It should come as no surprise that there were no societies in which the prescribed solution to such a dilemma was said to be the death of an older child. … this reaction merely illustrates that one takes for granted the phenomenon under discussion, namely the gradual deepening of parental commitment and love.
*Thinks about question for a while* *flails* “BUT MY CHILDREN ARE ALL WONDERFUL HOW COULD I CHOOSE?” *flails some more*
That said, I think there’s an alternative possibility besides just affection growing over time: the eldest child has already proven their ability to survive; an infant has not. The harsher the conditions of life (and thus, the more likelihood of actually facing a real situation in which you genuinely don’t have enough food for all of your children,) the higher the infant mortality rate. The eldest children have already run the infant mortality gauntlet and so are reasonably likely to make it to adulthood; the infants still stand a high chance of dying. Sacrificing the child you know is healthy and strong for the one with a high chance of dying is just stupid.
Whereas infant mortality is not one of my personal concerns.
Figure 4.4 shows that the risk of parental homicide is indeed a declining function of the child’s age. As we would anticipate, the most dramatic decrease occurs between infants and 1-year-old children. One reason for expecting this is that the lion’s share of the prepubertal increase in reproductive value in natural environments occurs within the first year.
(I think “prepubertal increase in reproductive value” means “decreased likelihood of dying.”)
Moreover, if parental disinclination reflects any sort of assessment of the child’s quality or the mother’s situation, then an evolved assessment mechanism should be such as to terminate any hopeless reproductive episode as early as possible, rather than to squander parental effort in an enterprise that will eventually be abandoned. … Mothers killed 61 in the first 6 months compared to just 27 in the second 6 months. For fathers, the corresponding numbers are 24 vs. 14. [See figure 4.4] … This pattern of victimization contrasts dramatically with the risk of homicide at the hands of nonrelatives (Figure 4.5)…
I would like to propose an alternative possibility: just as a child who attempts to drive a car is much more likely to crash immediately than to successfully navigate onto the highway and then crash, so a murderous person who gets their hands onto a child is more likely to kill it immediately than to wait a few years.
A similar mechanism may be at play in the apparent increase and then decrease in homicides of children by nonrelatives during toddlerhood. Without knowing anything about these cases, I can only speculate, but 1-4 are the ages when children are most commonly put into daycares or left with sitters while their moms return to work. The homicidally-minded among these caretakers, then, are likely to kill their charges sooner rather than later. (School-aged children, by contrast, are both better at running away from attackers and highly unlikely to be killed by their teachers.)
Teenagers are highly conflictual creatures, and the rate at which nonrelatives kill them explodes after puberty. When we consider the conspicuous, tempestuous conflicts that occur between teenagers and their parents–conflicts that apparently dwarf those of the preadolescent period–it is all the more remarkable that the risk of parental homicide continues its relentless decline to near zero.
… When mothers killed infants, the victims had been born to them at a mean age of 22.7 years, whereas older victims had been born at a mean maternal age of 24.5. This is a significant difference, but both means are significantly below the 25.8 years that was the average age of all new Canadian mothers during the same period, according to Canadian Vital Statistics.
In other words, impulsive fuckups who get accidentally pregnant are likely to be violent impulsive fuckups.
We find a similar result with respect to marital status: Mothers who killed older children are again intermediate between infanticidal women and the population-at-large. Whereas 51% of mothers committing infanticide were unmarried, the same was true of just 34% of those killing older children. This is still substantially above the 12% of Canadian births in which the new mother was unmarried …
Killing of an older child is often associated with maternal depression. Of the 95 mothers who killed a child beyond its infancy, 15.8% also committed suicide. … By contrast, only 2 of 88 infanticidal mothers committed suicide (and even this meager 2.3% probably overestimates the association of infanticide with suicide, since infanticides are the only category of homicides in which a significant incidence of undetected cases is likely.) … one of these 2 killed three older children as well.
In the Canadian data, it is also noteworthy that 35% of maternal infanticides were attributed by the investigating police force … [as] “mentally ill or mentally retarded (insane),” versus 58% of maternal homicides of older children. Here and elsewhere, it seems that the sorts of cases that are simultaneously rare and seemingly contrary to the actor’s interests–in both the Darwinian and the commonsense meaning of interest–also happen to be the sorts of cases most likely to be attributed to some sort of mental incompetence. … We identify as mad those people who lack a species-typical nepotistic perception of their interests or who no longer care to pursue them. …
Violent people go ahead and kill their kids; people who go crazy later kill theirs later.
We do at least know the ages of the 38 men who killed their infant children: the mean was 26.3 years. Moreover, we know that fathers averaged 4 years older than mothers for that substantial majority of Canadian births that occurred within marriages… . Since the mean age for all new Canadian mothers during the relevant period… was 25.8, it seems clear that infanticidal fathers are indeed relatively young. And as was the case with mothers, infanticidal fathers were significantly younger than those fathers who killed older offspring (mean age at the victim’s birth = 29.2 years). …
As with mothers, fathers who killed older children killed themselves as well significantly more often (43.6% of 101) than did those who killed their infant children (10.5% of 38). Also like mothers is the fact that those infanticidal fathers who did commit suicide were significantly older (mean age = 30.5 years) than those who did not (mean = 25.8). Likewise, the paternal age at which older victims had been born was also significantly greater for suicidal (mean = 31.1 years; N = 71) than for nonsuicidal (mean =27.5; N = 67) homicidal fathers. And men who killed their older children were a little more likely to be deemed mentally incompetent (20.8%) than those who killed their infants (15.8%). …
Fathers, however, were significantly less likely to commit suicide after killing an adult offspring (19% of 21 men) than a child (50% of 80 men.) … 20 of the 22 adult victims of their father were sons… three of the four adult victims of mothers were daughters. … There is no hint of such a same-sex bias in the killings of either infants… or older children. …
An infrequent but regular variety of homicide is that in which a man destroys his wife and children. A corresponding act of familicide by the wife is almost unheard of. …
No big surprises in this section.
Perhaps the most obvious prediction from a Darwinian view of parental motives is this: Substitute parents will generally tend to care less profoundly for their children than natural parents, with the result that children reared by people other than their natural parents will be more often exploited and otherwise at risk. Parental investment is a precious resource, and selection must favor those parental psyches that do not squander it on nonrelatives.
Disclaimer: obviously there are good stepparents who care deeply for their stepchildren. I’ve known quite a few. But I’ve also met some horrible stepparents. Given the inherent vulnerability of children, I find distasteful our society’s pushing of stepparenting as normal without cautions against its dangers. In most cases, remarriage seems to be undertaken to satisfy the parent, not the child.
In an interview study of stepparents in Cleveland, Ohio, for example–a study of a predominantly middle-class group suffering no particular distress or dysfunction–Lucile Duberman (1975) found that only 53% of stepfathers and 25% of stepmothers could claim to have “parental feeling” toward their stepchildren, and still fewer to “love” them.
Some of this may be influenced by the kinds of people who are likely to become stepparents–people with strong family instincts probably have better luck getting married to people like themselves and staying that way than people who are bad at relationships.
In an observational study of Trinidadian villagers, Mark Flinn (1988) found that stepfathers interacted less with “their” children than did natural fathers; that interactions were more likely to be aggressive within steprelationships than within the corresponding natural relationships; and that stepchildren left home at an earlier age.
Pop psychology and how-to manuals for stepfamilies have become a growth industry. Serious study of “reconstituted” families is also burgeoning. Virtually all of this literature is dominated by a single theme: coping with the antagonisms…
Here the authors stop to differentiate between stepparenting and adoption, which they suspect is more functional due to adoptive parents actually wanting to be parents in the first place. However,
such children have sometimes been found to suffer when natural children are subsequently born to the adopting couple, a result that has led some professionals to counsel against adoption by childless couples until infertility is definitely established. …
Continuing on with stepparents:
The negative characterization of stepparents is by no means peculiar to our culture. … From Eskimos to Indonesians, through dozens of tales, the stepparent is the villain of every piece. … We have already encountered the Tikopia or Yanomamo husband who demands the death of his new wife’s prior children. Other solutions have included leaving the children with postmenopausal matrilineal relatives, and the levirate, a wide-spread custom by which a widow and her children are inherited by the dead man’s brother or other near relative. …
Social scientists have turned this scenario on its head. The difficulties attending steprelationships–insofar as they are acknowledged at all–are presumed to be caused by the “myth of the cruel stepparent” and the child’s fears.
Why this bizarre counterintuitive view is the conventional wisdom would be a topic for a longer book than this; suffice to say that the answer surely has more to do with ideology than with evidence. In any event, social scientists have staunchly ignored the question of the factual basis for the negative “stereotyping” of stepparents.
Under Freud’s logic, all sorts of people who’d been genuinely hurt by others were summarily dismissed, told that they were the ones who actually harbored ill-will against others and were just “projecting” their emotions onto their desired victims.
Freudianism is a crock of shit, but in this case, it helped social “reformers” (who of course don’t believe in silly ideas like evolution) discredit people’s perfectly reasonable fears in order to push the notion that “family” doesn’t need to follow traditional (ie, biological) forms, but can be reinvented in all sorts of novel ways.
So are children at risk in stepparent homes in contemporary North America? [see Figures 4.7 and 4.8.] … There is … no appreciable statistical confounding between steprelationships and poverty in North America. … Stepparenthood per se remains the single most powerful risk factor for child abuse that has yet been identified. (here and throughout this discussion “stepparents” include both legal and common-law spouses of the natural parent.) …
Speaking of Figures 4.7 and 4.8, I must say that the kinds of people who get divorced (or were never married) and remarried within a year of their kid’s birth are likely to be unstable people who tend to pick particularly bad partners, and the kind of person willing to enter into a relationship with someone who has a newborn is also likely to be, well, unusual. Apparently homicidal.
By contrast, the people who are willing to marry someone who already has, say, a ten year old, may be relatively normal folks.
Just how great an elevation of risk are we talking about? Our efforts to answer that question have been bedeviled by a lack of good information on the living arrangements of children in the general population. … there are no official statistics [as of when this was written] on the numbers of children of each age who live in each household type. There is no question that the 43% of murdered American child abuse victims who dwelt with substitute parents is far more than would be expected by chance, but estimates of that expected percentage can only be derived from surveys that were designed to answer other questions. For a random sample of American children in 1976, … the best available national survey… indicates that only about 1% or fewer would be expected to have dwelt with a substitute parent. An American child living with one or more substitute parents in 1976 was therefore approximately 100 times as likely to be fatally abused as a child living with natural parents only…
Results for Canada are similar. In Hamilton, Ontario in 1983, for example, 16% of child abuse victims under 5 years of age lived with a natural parent and a stepparent… Since small children very rarely have stepparents–less than 1% of preschoolers in Hamilton in 1983, for example–that 16% represents forty times the abuse rate for children of the same age living with natural parents. … 147 Canadian children between the ages of 1 and 4 were killed by someone in loco parentis between 1974 and 1983; 37 of those children (25.2%) were the victims of their stepparents, and another 5 (3.4%) were killed by unrelated foster parents.
…The survey shows, for example, that 0.4% of 2,852 Canadian children, aged 1-4 in 1984, lived with a stepparent. … For the youngest age group in Figure 4.9, those 2 years of age and younger, the risk from a stepparent is approximately 70 times that from a natural parent (even though the latter category includes all infanticides by natural mothers.)
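The relative-risk arithmetic in the passages above can be sketched in a few lines. This is a back-of-the-envelope check using only the percentages quoted, not Daly and Wilson’s actual method, and `risk_ratio` is a name I’ve made up for illustration:

```python
def risk_ratio(victim_share, population_share):
    """Ratio of the abuse rate among children with a stepparent to the
    rate among children living with natural parents only, computed from
    the share of victims and the share of all children in each group."""
    stepparent_rate = victim_share / population_share
    natural_rate = (1 - victim_share) / (1 - population_share)
    return stepparent_rate / natural_rate

# Hamilton, Ontario: 16% of young abuse victims had a stepparent, vs.
# ~0.4% of preschoolers generally. The text's "forty times" is the
# simple ratio 0.16 / 0.004 = 40; the full risk ratio comes out ~47.
hamilton = risk_ratio(0.16, 0.004)

# U.S., 1976: 43% of fatally abused children lived with a substitute
# parent, vs. "1% or fewer" of all children. With exactly 1% the ratio
# is ~75; the book's "approximately 100 times" leans on the "or fewer."
us_1976 = risk_ratio(0.43, 0.01)

print(round(hamilton), round(us_1976))
```

Note that when the exposed group is a small fraction of the population, the risk ratio is well approximated by the simple quotient victim_share / population_share, which is the shortcut the quoted figures appear to use.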
Now we need updated data. I wonder if abortion has had any effect on the rates of infanticide and if increased public acceptance of stepfamilies has led to more abused children or higher quality people being willing to become stepparents.
Honestly, left to my own devices, I wouldn’t own a TV. (With Mythbusters canceled, what’s the point anymore?)
Don’t get me wrong. I have watched (and even enjoyed) the occasional sitcom. I’ve even tried watching football. I like comedies. They’re funny. But after they end, I get that creeping feeling of emptiness inside, like when you’ve eaten a bowl of leftover Halloween candy instead of lunch. There is no “meat” to these programs–or vegan-friendly vegetable protein, if you prefer.
I do enjoy documentaries, though I often end up fast-forwarding through large chunks of them because they are full of filler shots of rotating galaxies or astronomers parking their telescopes or people… taalkiiing… sooo… sloooowwwwlllly… And sadly, if you’ve seen one documentary about ancient Egypt, you’ve seen them all.
Ultimately, time is a big factor: I am always running short. Once I’m done with the non-negotiables (like “take care of the kids” and “pay the bills,”) there’s only so much time left, and time spent watching TV is time not spent writing. Since becoming a competent writer is one of my personal goals, TV gets punted to the bottom of the list, slightly below doing the dishes.
Obviously not everyone writes, but I have a dozen other backup projects for when I’m not writing, everything from “read more books” to “volunteer” to “exercise.”
I think it is a common fallacy to default to assuming that other people are like oneself. I default to assuming that other people are time-crunched, running on 8 shots of espresso and trying to cram in a little time to read Tolstoy and get the tomatoes planted before they fall asleep. (And I’m not even one of those Type-A people.)
Obviously everyone isn’t like me. They come home from work, take care of their kids, make dinner, and flip on the TV.
An acquaintance recently made a sad but illuminating comment regarding their favorite TV shows, “I know they’re not real, but it feels like they are. It’s like they’re my friends.”
I think the simple answer is that we process the pictures on the TV as though they were real. TV people look like people and sound like people, so who cares if they don’t smell like people? Under normal (pre-TV) circumstances, if you hung out with some friendly, laughing people every day in your living room, they were your family. You liked them, they liked you, and you were happy together.
Today, in our atomized world of single parents, only children, spinsters and eternal bachelors, what families do we have? Sure, we see endless quantities of people on our way to work, but we barely speak, nod, or glance at each other, encapsulated within our own cars or occupied with checking Facebook on our cellphones while the train rumbles on.
As our connections to other people have withered away, we’ve replaced them with fake ones.
OZZIE & HARRIET: The Adventures of America’s Favorite Family
The Adventures of Ozzie and Harriet was the first and longest-running family situational comedy in television history. The Nelsons came to represent the idealized American family of the 1950s – where mom was a content homemaker, dad’s biggest decision was whether to give his sons the keys to the car, and the boys’ biggest problem was getting a date to the high school prom. …When it premiered, Ozzie & Harriet: The Adventures of America’s Favorite Family was the highest-rated documentary in A&E’s history.
(According to Wikipedia, Ozzie and Harriet started on the radio back in the 30s, got a comedy show (still on radio) in 1944, and were on TV from 1952-1966.) It was, to some extent, about a real family–the actors in the show were an actual husband and wife + their kids, but the show itself was fictionalized.
It even makes sense to ask people, “Who is your favorite TV personality?“–to which the most common answer isn’t Adam Savage or Jamie Hyneman, but Mark Harmon, who plays some made-up guy named Leroy Jethro Gibbs.
The rise of “reality TV” only makes the desire to treat TV people as real people you’re actually hanging out with all the more palpable–and then there’s the incessant newsstand harping of celebrity gossip. The only thing I want out of a movie star (besides talent) is that I not recognize them; it appears that the only thing everyone else wants is that they do recognize them.
In Blockbusters: Hit-Making, Risk-Taking, and the Big Business of Entertainment, the new book by Anita Elberse, Filene professor of business administration, Elberse (el-BER-see) spent 10 years interviewing and observing film, television, publishing, and sports executives to distill the most profitable strategy for these high-profile, unpredictable marketplaces. … The most profitable business strategy, she says, is not the “long tail,” but its converse: blockbusters like Star Wars, Avatar, Friends, the Harry Potter series, and sports superstars like Tom Brady.
Strategically, the blockbuster approach involves “making disproportionately big investments in a few products designed to appeal to mass audiences,” … “Production value” means star actors and special effects. … a studio can afford only a few “event movies” per year. But Horn’s big bets for Warner Brothers—the Harry Potter series, The Dark Knight, The Hangover and its sequel, Ocean’s Eleven and its two sequels, Sherlock Holmes—drew huge audiences. By 2011, Warner became the first movie studio to surpass $1 billion in domestic box-office receipts for 11 consecutive years. …
Jeff Zucker ’86 put a contrasting plan into place as CEO at NBC Universal. In 2007 he led a push to cut the television network’s programming costs: … Silverman began cutting back on expensive dramatic content, instead acquiring rights to more reasonably priced properties; eschewing star actors and prominent TV producers, who commanded hefty fees; and authorizing fewer costly pilots for new series. The result was that by 2010, NBC was no longer the top-rated TV network, but had fallen to fourth place behind ABC, CBS, and Fox, and “was farther behind on all the metrics that mattered,” writes Elberse, “including, by all accounts, the profit margins Zucker and Silverman had sought most.” Zucker was asked to leave his job in 2010. …
From a business perspective, “bankable” movies stars like Julia Roberts, Johnny Depp, or George Clooney function in much the way Harry Potter and Superman do: providing a known, well-liked persona.
So people like seeing familiar faces in their movies (except Oprah Winfrey, who is apparently not a draw:
the 1998 film Beloved, starring Oprah Winfrey, based on Nobel Prize-winner Toni Morrison’s eponymous 1987 novel and directed by Oscar-winner Jonathan Demme … flopped resoundingly: produced for $80 million, it sold only $23 million in tickets.
Or maybe Beloved just isn’t the kind of feel-good action flick that drives movie audiences the way Batman is.)
But what about sports?
Here I am on even shakier ground than sitcoms. I can understand playing sports–they’re live action versions of video games, after all. You get to move around, exercise, have fun with your friends, and triumphantly beat them at something. (Or if you’re me, lose.) I can understand cheering for your kids and being proud of them as they get better and better at some athletic skill (or at least try hard at it.)
I don’t understand caring about strangers playing a game.
I have no friends on the Yankees or the Mets, the Phillies or the Marlins. I’ve never met a member of the Alabama Crimson Tide or the Clemson Tigers, and I harbor no illusions that my children will ever play on such teams. I feel no loyalty to the athletes-drawn-from-all-over-the-country who play on my “hometown” team, and I consider athlete salaries vaguely obscene.
I find televised sports about as interesting as watching someone do math. If the point of the game is to win, then why not just watch a 5-minute summary at the end of the day of all the teams’ wins and losses?
But according to The Way of the Blockbuster:
Perhaps no entertainment realm takes greater care in building a brand name than professional sports: fan loyalty reliably builds repeat business. “The NFL is blockbuster content,” Elberse says. “It’s the most sought-after content we have in this country. Four of the five highest-rated television shows [in the United States] ever are Super Bowls. NFL fans spend an average of 9.5 hours per week on games and related content. That gives the league enormous power when it comes to negotiating contracts with television networks.”
Elberse has studied American football and basketball and European soccer, and found that selling pro sports has much in common with selling movies, TV shows, or books. Look at the Real Madrid soccer club—the world’s richest, with annual revenues of $693 million and a valuation of $3.3 billion. Like Hollywood studios, Real Madrid attracts fan interest by engaging superstars—such as Cristiano Ronaldo, the Portuguese forward the club acquired from Manchester United for a record $131.6 million in 2009. “We think of ourselves as content producers,” a Real Madrid executive told Elberse, “and we think of our product—the match—as a movie.” As she puts it: “It might not have Tom Cruise in it, but they do have Cristiano Ronaldo starring.”
In America, sports stars are famous enough that even I know some of their names, like Peyton Manning, Serena Williams, and Michael Jordan.
I think the basic drive behind people’s love of TV sports is the same as their love of sitcoms (and dramas): they process it as real. And not just real, but as people they know: their family, their tribe. Those are their boys out there, battling for glory and victory against that other tribe’s boys. It’s vicarious warfare with pseudo-armies, a domesticated expression of the tribal urge to slaughter your enemies, drive off their cattle and abduct their women. So what if the army isn’t “real,” if the heroes aren’t your brother or cousin but paid gladiators shipped in from thousands of miles away to perform for the masses? Your brain still interprets it as though it were; you still enjoy it.
Continuing with yesterday’s discussion (in response to a reader’s question):
Why are people snobs about intelligence?
Is math ability better than verbal?
Do people only care about intelligence in the context of making money?
1. People are snobs. Not all of them, obviously–just a lot of them.
So we’re going to have to back this up a step and ask why are people snobs, period.
Paying attention to social status–both one’s own and others’–is probably instinctual. We process social status in our prefrontal cortexes–the part of our brain generally involved in complex thought, imagination, long-term planning, personality, not being a psychopath, etc. Our brains respond positively to images of high-status items–activating reward-feedback loops that make us feel good–and negatively to images of low-status items–activating feedback loops that make us feel bad.
…researchers asked a person if the following statement was an accurate description of themselves: “I wouldn’t hesitate to go out of my way to help someone in trouble.” Some of the participants answered the question without anyone else seeing their response. Others knowingly revealed their answer to two strangers who were watching in a room next to them via video feed. The result? When the test subjects revealed an affirmative answer to an audience, their [medial prefrontal cortexes] lit up more strongly than when they kept their answers to themselves. Furthermore, when the participants revealed their positive answers not to strangers, but to those they personally held in high regard, their MPFCs and reward striatums activated even more strongly. This confirms something you’ve assuredly noticed in your own life: while we generally care about the opinions of others, we particularly care about the opinions of people who really matter to us.
(Note what constitutes a high-status activity.)
But this alone does not prove that paying attention to social status is instinctual. After all, I can also point to the part of your brain that processes written words (the Visual Word Form Area,) and yet I don’t assert that literacy is an instinct. For that matter, anything we think about has to be processed in our brains somewhere, whether instinct or not.
Better evidence comes from anthropology and zoology. According to Wikipedia, “All societies have a form of social status,” even hunter-gatherers. If something shows up in every single human society, that’s a pretty good sign that it is probably instinctual–and if it isn’t, it is so useful a thing that no society exists without it.
Among animals, social status is generally determined by a combination of physical dominance, age, relationship, and intelligence. Killer whale pods, for example, are led by the eldest female in the family; leadership in elephant herds is passed down from a deceased matriarch to her eldest daughter, even if the matriarch has surviving sisters. Male lions assert dominance by being larger and stronger than other lions.
In all of these cases, the social structure exists because it benefits the group, even if it harms some of the individuals in it. If having no social structure were beneficial for wolves, then wolf packs without alpha wolves would out-compete packs with alphas. This is the essence of natural selection.
Among humans, social status comes in two main forms, which I will call “earned” and “background.”
“Earned” social status stems from things you do, like rescuing people from burning buildings, inventing quantum physics, or stealing wallets. High status activities are generally things that benefit others, and low-status activities are generally those that harm others. This is why teachers are praised and thieves are put in prison.
Earned social status is a good thing, because it rewards people for being helpful.
“Background” social status is basically stuff you were born into or have no control over, like your race, gender, the part of the country you grew up in, your accent, name, family reputation, health/disability, etc.
Americans generally believe that you should not judge people based on background social status, but they do it, anyway.
Interestingly, high-status people are not generally violent. (Just compare crime rates by neighborhood SES.) Outside of military conquest, violence is the domain of the low-class and those afraid they are slipping in social class, not the high class. Compare Angela Merkel to the average German far-right protester. Obviously the protester would win in a fist-fight, but Merkel is still in charge. High class people go out of their way to donate to charity, do volunteer work, and talk about how much they love refugees. In the traditional societies of the Pacific Northwest, they held potlatches at which they distributed accumulated wealth to their neighbors; in our society, the wealthy donate millions to education. Ideally, in a well-functioning system, status is the thanks rich people get for doing things that benefit the community instead of spending their billions on gold-plated toilets.
The Arabian babbler … spends most of its life in small groups of three to 20 members. These groups lay their eggs in a communal nest and defend a small territory of trees and shrubs that provide much-needed safety from predators.
When it’s living as part of a group, a babbler does fairly well for itself. But babblers who get kicked out of a group have much bleaker prospects. These “non-territorials” are typically badgered away from other territories and forced out into the open, where they often fall prey to hawks, falcons, and other raptors. So it really pays to be part of a group. … Within a group, babblers assort themselves into a linear and fairly rigid dominance hierarchy, i.e., a pecking order. When push comes to shove, adult males always dominate adult females — but mostly males compete with males and females with females. Very occasionally, an intense “all-out” fight will erupt between two babblers of adjacent rank, typically the two highest-ranked males or the two highest-ranked females. …
Most of the time, however, babblers get along pretty well with each other. In fact, they spend a lot of effort actively helping one another and taking risks for the benefit of the group. They’ll often donate food to other group members, for example, or to the communal nestlings. They’ll also attack foreign babblers and predators who have intruded on the group’s territory, assuming personal risk in an effort to keep others safe. One particularly helpful activity is “guard duty,” in which one babbler stands sentinel at the top of a tree, watching for predators while the rest of the group scrounges for food. The babbler on guard duty not only foregoes food, but also assumes a greater risk of being preyed upon, e.g., by a hawk or falcon. …
Unlike chickens, who compete to secure more food and better roosting sites for themselves, babblers compete to give food away and to take the worst roosting sites. Each tries to be more helpful than the next. And because it’s a competition, higher-ranked (more dominant) babblers typically win, i.e., by using their dominance to interfere with the helpful activities of lower-ranked babblers. This competition is fiercest between babblers of adjacent rank. So the alpha male, for example, is especially eager to be more helpful than the beta male, but doesn’t compete nearly as much with the gamma male. Similar dynamics occur within the female ranks.
In the eighteenth and early nineteenth centuries, wealthy private individuals substantially supported the military, with particular wealthy men buying supplies for a particular regiment or fort.
Noblemen paid high prices for military commands, and these posts were no sinecure. You got the obligation to substantially supply the logistics for your men, the duty to obey stupid orders that would very likely lead to your death, the duty to lead your men from in front while wearing a costume designed to make you particularly conspicuous, and the duty to engage in honorable personal combat, man to man, with your opposite number who was also leading his troops from in front.
A vestige of this tradition remains in that every English prince has been sent to war and has placed himself very much in harm’s way.
It seems obvious to me that a soldier being led by a member of the ruling class who is soaking up the bullets from in front is a lot more likely to be loyal and brave than a soldier sent into battle by distant rulers safely in Washington who despise him as a sexist homophobic racist murderer, that a soldier who sees his commander, a member of the ruling classes, fighting right in front of him, is reflexively likely to fight.
(Note, however, that magnanimity is not the same as niceness. The only people who are nice to everyone are store clerks and waitresses, and they’re only nice because they have to be or they’ll get fired.)
Most people are generally aware of each others’ social statuses, using contextual clues like clothing and accents to make quick, rough estimates. These contextual clues are generally completely neutral–they just happen to correlate with other behaviors.
For example, there is nothing objectively good or bad for society about wearing your pants belted beneath your buttocks, aside from it being an awkward way to wear your pants. But the style correlates with other behaviors, like crime, drug use, and aggression, low paternal investment, and unemployment, all of which are detrimental to society, and so the mere sight of underwear spilling out of a man’s pants automatically assigns him low status. There is nothing causal in this relationship–being a criminal does not make you bad at buckling your pants, nor does wearing your pants around your knees somehow inspire you to do drugs. But these things correlate, and humans are very good at learning patterns.
Likewise, there is nothing objectively better about operas than Disney movies, no real difference between a cup of coffee brewed in the microwave and one from Starbucks; a Harley Davidson and a Vespa are both motorcycles; and you can carry stuff around in just about any bag or backpack, but only the hoity-toity can afford something as objectively hideous as a $26,000 Louis Vuitton backpack.
All of these things are fairly arbitrary and culturally dependent–the way you belt your pants can’t convey social status in a society where people don’t wear pants; your taste in movies couldn’t matter before movies were invented. Among hunter-gatherers, social status is based on things like one’s skills at hunting, and if I showed up to the next PTA meeting wearing a top hat and monocle, I wouldn’t get any status points at all.
We tend to aggregate the different social status markers into three broad classes (lower, middle, and upper). As Scott Alexander says in his post about Siderea’s essay on class in America, which divides the US into 10% Underclass, 65% Working Class, 23.5% Gentry Class, and 1.5% Elite:
Siderea notes that Church’s analysis independently reached about the same conclusion as Paul Fussell’s famous guide. I’m not entirely sure how you’d judge this (everybody’s going to include lower, middle, and upper classes), but eyeballing Fussell it does look a lot like Church, so let’s grant this.
It also doesn’t sound too different from Marx. Elites sound like capitalists, Gentry like bourgeoisie, Labor like the proletariat, and the Underclass like the lumpenproletariat. Or maybe I’m making up patterns where they don’t exist; why should the class system of 21st century America be the same as that of 19th century industrial Europe?
There’s one more discussion of class I remember being influenced by, and that’s Unqualified Reservations’ Castes of the United States. Another one that you should read but that I’ll summarize in case you don’t:
1. Dalits are the underclass, … 2. Vaisyas are standard middle-class people … 3. Brahmins are very educated people … 4. Optimates are very rich WASPs … now they’re either extinct or endangered, having been pretty much absorbed into the Brahmins. …
Michael Church’s system (henceforth MC) and the Unqualified Reservation system (henceforth UR) are similar in some ways. MC’s Underclass matches Dalits, MC’s Labor matches Vaisyas, MC’s Gentry matches Brahmins, and MC’s Elite matches Optimates. This is a promising start. It’s a fourth independent pair of eyes that’s found the same thing as all the others. (commenters bring up Joel Kotkin and Archdruid Report as similar convergent perspectives).
I suspect the tendency to try to describe society as consisting of three broad classes (with the admission that other, perhaps tiny classes that don’t exactly fit into the others might exist) is actually just an artifact of being a three-biased society that likes to group things in threes (the Trinity, three-beat joke structure, three bears, Three Musketeers, three notes in a chord, etc.) This three-bias isn’t a human universal (or so I have read) but has probably been handed down to us from the Indo-Europeans (“Many Indo-European societies know a threefold division of priests, a warrior class, and a class of peasants or husbandmen. Georges Dumézil has suggested such a division for Proto-Indo-European society,”) so we’re so used to it that we don’t even notice ourselves doing it.
(For more information on our culture’s three-bias and different number biases in other cultures, see Alan Dundes’s Interpreting Folklore, though I should note that I read it back in high school and so my memory of it is fuzzy.)
(Also, everyone is probably at least subconsciously cribbing Marx, who was probably cribbing from some earlier guy who cribbed from another earlier guy, who set out with the intention of demonstrating that society–divided into nobles, serfs, and villagers–reflected the Trinity, just like those Medieval maps that show the world divided into three parts or the conception of Heaven, Hell, and Purgatory.)
At any rate, I am skeptical of any system that lumps 65% of people into one social class and 1.5% of people into a different social class as potentially too finely grained at one end of the scale and not finely grained enough at the other. Determining the exact number of social classes in American society may ultimately be futile–perhaps there really are three (or four) highly distinct groups, or perhaps social classes transition smoothly from one to the next with no sharp divisions.
I lean toward the latter theory, with broad social classes as merely a convenient shorthand for extremely broad generalizations about society. If you look any closer, you tend to find that people do draw finer-grained distinctions between themselves and others than “65% Working Class” would imply. For example, a friend who works in agriculture in Greater Appalachia once referred dismissively to other people they had to deal with as “rednecks.” I might not be able to tell what differentiates them, but clearly my friend could. Similarly, I am informed that there are different sorts of homelessness, from true street living to surviving in shelters, and that lifetime homeless people are a different breed altogether. I might call them all “homeless,” but to the homeless, these distinctions are important.
Is social class evil?
This question was suggested by a different friend.
I suspect that social class is basically, for the most part, neutral-to-useful. I base this on the fact that most people do not work very hard to erase markers of class distinction, but instead actively embrace particular class markers. (Besides, you can’t get rid of it, anyway.)
It is not all that hard to learn the norms and values of a different social class and strategically employ them. Black people frequently switch between speaking African American Vernacular English at home and standard English at work; I can discuss religion with Christian conservatives and malevolent AI risk with nerds; you can purchase a Harley Davidson t-shirt as easily as a French beret and scarf.
(I am reminded here of an experiment in which researchers were looking to document cab drivers refusing to pick up black passengers; they found that when the black passengers were dressed nicely, drivers would pick them up, but when they wore “ghetto” clothes, the cabs wouldn’t. Cabbies: responding more to perceived class than race.)
And yet, people don’t–for the most part–mass adopt the social markers of the upper class just to fool them. They love their motorcycle t-shirts, their pumpkin lattes, even their regional accents. Class markers are an important part of peoples’ cultural / tribal identities.
But what about class conflicts?
Because every class has its own norms and values, every class is, to some degree, disagreeing with the other classes. People for whom frugality and thrift are virtues will naturally think that people who drink overpriced coffee are lacking in moral character. People for whom anti-racism is the highest virtue will naturally think that Trump voters are despicable racists. A Southern Baptist sees atheists as morally depraved fetus murderers; nerds and jocks are famously opposed to each other; and people who believe that you should graduate from college, become established in your career, get married, and then have 0-1.5 children disapprove of people who drop out of high school, have a bunch of children with a bunch of different people, and go on welfare.
A moderate sense of pride in one’s own culture is probably good and healthy, but spending too much energy hating other groups is probably negative–you may end up needlessly hurting people whose cooperation you would have benefited from, reducing everyone’s well-being.
(A good chunk of our political system’s dysfunctions are probably due to some social classes believing that other social classes despise them and are voting against their interests, and so counter-voting to screw over the first social class. I know at least one person who switched allegiance from Hillary to Trump almost entirely to stick it to liberals they think look down on them for classist reasons.)
Ultimately, though, social class is with us whether we like it or not. Even if a full generation of orphan children were raised with no knowledge of their origins and completely equal treatment by society at large, each would end up marrying/associating with people who have personalities similar to their own (and remember that genetics plays a large role in personality.) Just as current social classes in America are ethnically different, (Southern whites are drawn from different European populations than Northern whites, for example,) so would the society resulting from our orphanage experiment differentiate into groups similar in genetics and personality.
Why do Americans generally proclaim their opposition to judging others based on background status, and then act classist, anyway? There are two main reasons.
As already discussed, different classes have real disagreements with each other. Even if I think I shouldn’t judge others, I can’t put aside my moral disgust at certain behaviors just because they happen to correlate with different classes.
It sounds good to say nice, magnanimous things that make you sound more socially sensitive and aware than others, like, “I wouldn’t hesitate to go out of my way to help someone in trouble.” So people like to say these things whether they really mean them or not.
In reality, people are far less magnanimous than they like to claim they are in front of their friends. People like to say that we should help the homeless and save the whales and feed all of the starving children in Africa, but few people actually go out of their way to do such things.
There is a reason Mother Teresa is considered a saint, not an archetype.
In real life, not only does magnanimity have a cost, (which the rich can better afford,) but if you don’t live up to your claims, people will notice. If you talk a good talk about loving others but actually mistreat them, people will decide that you’re a hypocrite. On the internet, you can post memes for free without having to back them up with real action, causing discussions to descend into competitive virtue-signaling in which no one wants to be the first person to admit that they actually are occasionally self-interested. (Cory Doctorow has a relevant discussion about how “reputation economies”–especially internet-based ones–can go horribly wrong.)
Unfortunately, people often confuse background and achieved status.
American society officially has no hereditary social classes–no nobility, no professions limited legally to certain ethnicities, no serfs, no Dalits, no castes, etc. Officially, if you can do the job, you are supposed to get it.
Most of us believe, at least abstractly, that you shouldn’t judge or discriminate against others for background status factors they have no control over, like where they were born, the accent they speak with, or their skin tone. If I have two resumes, one from someone named Lakeesha, and the other from someone named Ian William Esquire III, I am supposed to consider each on its merits, rather than the connotations the names invoke.
But because “status” is complicated, people often go beyond advocating against “background” status and also advocate that we shouldn’t accord social status for any reason at all. That is, full social equality.
This is not possible and would be deeply immoral in practice.
When you need heart surgery, you really hope that the guy cutting you open is a top-notch heart surgeon. When you’re flying in an airplane, you hope that both the pilot and the guys who built the plane are highly skilled. Chefs must be good at cooking and authors good at writing.
These are all forms of earned status, and they are good.
Smart people are valuable to society because they do nice things like save you from heart attacks or invent cell-phones. This is not “winning at capitalism;” this is benefiting everyone around them. In this context, I’m happy to let smart people have high status.
In a hunter-gatherer society, smart people are the ones who know the most about where animals live and how to track them, how to get water during a drought, and where the one-inch stem they spotted last season–the one that marks a tasty underground tuber–is located. Among nomads, smart people are the ones with the biggest mental maps of the territory, the folks who know the safest and quickest routes from good summer pasture to good winter pasture, how to save an animal from dying and how to heal a sick person. Among pre-literate people, smart people composed epic poems that entertained their neighbors for many winters’ nights, and among literate ones, the smart people became scribes and accountants. Even the communists valued smart people, when they weren’t chopping their heads off for being bourgeois scum.
So even if we say, abstractly, “I value all people, no matter how smart they are,” the smart people do more of the stuff that benefits society than the dumb people, which means they end up with higher social status.
So, yes, high IQ is a high social status marker, and low IQ is a low social status marker, and thus at least some people will be snobs about signaling their IQ and their disdain for dumb people.
I am speaking here very abstractly. There are plenty of “high status” people who are not benefiting society at all. Plenty of people who use their status to destroy society while simultaneously enriching themselves. And yes, someone can come into a community, strip out all of its resources and leave behind pollution and unemployment, and happily call it “capitalism” and enjoy high status as a result.
I would be very happy if we could stop engaging in competitive holiness spirals and stop lionizing people who became wealthy by destroying communities. I don’t want capitalism at the expense of having a pleasant place to live in.
As we were discussing yesterday, I theorize that people have neural feedback loops that reward them for conforming/imitating others/obeying authorities and punish them for disobeying/not conforming.
This leads people to obey authorities or go along with groups even when they know, logically, that they shouldn’t.
There are certainly many situations in which we want people to conform even though they don’t want to, like when my kids have to go to bed or buckle their seatbelts–as I said yesterday, the feedback loop exists because it is useful.
But there are plenty of situations where we don’t want people to conform, like when trying to brainstorm new ideas.
Under what conditions will people disobey authority?
In person, people may disobey authorities when they have some other social system to fall back on. If disobeying an authority in Society A means I lose social status in Society A, I will be more likely to disobey if I am a member in good standing in Society B.
If I can use my disobedience against Authority A as social leverage to increase my standing in Society B, then I am all the more likely to disobey. A person who can effectively stand up to an authority figure without getting punished must be, our brains reason, a powerful person, an authority in their own right.
Teenagers do this all the time, using their defiance against adults, school, teachers, and society in general to curry higher social status among other teenagers, the people they actually care about impressing.
SJWs do this, too:
I normally consider the president of Princeton an authority figure, and even though I probably disagree with him on far more political matters than these students do, I’d be highly unlikely to be rude to him in real life–especially if I were a student he could get expelled from college.
But if I had an outside audience–Society B–clapping and cheering for me behind the scenes, the urge to obey would be weaker. And if yelling at the President of Princeton could guarantee me high social status, approval, job offers, etc., then there’s a good chance I’d do it.
But then I got to thinking: Are there any circumstances under which these students would have accepted the president’s authority?
Obviously if the man had a proven track record of competently performing a particular skill the students wished to learn, they might follow his example.
If authority works via neural feedback loops, employing some form of “mirror neurons,” do these systems activate more strongly when the people we are perceiving look more like ourselves (or like our internalized notion of what people in our “tribe” look like, since mirrors are a recent invention)?
In other words, what would a cross-racial version of the Milgram experiment look like?
Unfortunately, it doesn’t look like anyone has tried it (and to do it properly, it’d need to be a big experiment, involving several “scientists” of different races [so that the study isn’t biased by one “scientist” just being bad at projecting authority] interacting with dozens of students of different races, which would be a rather large undertaking.) I’m also not finding any studies on cross-racial authority (I did find plenty of websites offering practical advice about different groups’ leadership styles,) though I’m sure someone has studied it.
However, I did find cross-racial experiments on empathy, which may involve the same brain systems, and so are suggestive:
Using transcranial magnetic stimulation, we explored sensorimotor empathic brain responses in black and white individuals who exhibited implicit but not explicit ingroup preference and race-specific autonomic reactivity. We found that observing the pain of ingroup models inhibited the onlookers’ corticospinal system as if they were feeling the pain. Both black and white individuals exhibited empathic reactivity also when viewing the pain of stranger, very unfamiliar, violet-hand models. By contrast, no vicarious mapping of the pain of individuals culturally marked as outgroup members on the basis of their skin color was found. Importantly, group-specific lack of empathic reactivity was higher in the onlookers who exhibited stronger implicit racial bias.
Using the event-related potential (ERP) approach, we tracked the time-course of white participants’ empathic reactions to white (own-race) and black (other-race) faces displayed in a painful condition (i.e. with a needle penetrating the skin) and in a nonpainful condition (i.e. with Q-tip touching the skin). In a 280–340 ms time-window, neural responses to the pain of own-race individuals under needle penetration conditions were amplified relative to neural responses to the pain of other-race individuals displayed under analogous conditions.
In this study, we used functional magnetic resonance imaging (fMRI) to investigate how people perceive the actions of in-group and out-group members, and how their biased view in favor of own team members manifests itself in the brain. We divided participants into two teams and had them judge the relative speeds of hand actions performed by an in-group and an out-group member in a competitive situation. Participants judged hand actions performed by in-group members as being faster than those of out-group members, even when the two actions were performed at physically identical speeds. In an additional fMRI experiment, we showed that, contrary to common belief, such skewed impressions arise from a subtle bias in perception and associated brain activity rather than decision-making processes, and that this bias develops rapidly and involuntarily as a consequence of group affiliation. Our findings suggest that the neural mechanisms that underlie human perception are shaped by social context.
None of these studies shows definitively whether in-group vs. out-group biases are an inherent feature of neurological systems, but Avenanti’s finding that people were more empathetic toward a violet-skinned model than toward a member of a racial out-group suggests that some amount of learning is involved in the process–and that rather than comparing people against one’s in-group, we may be comparing them against our out-group.
At any rate, you may get similar outcomes either way.
In cases where you want to promote group cohesion and obedience, it may be beneficial to sort people by self-identity.
In cases where you want to guard against groupthink, obedience, or conformity, it may be beneficial to mix up the groups. Intellectual diversity is great, but even ethnic diversity may help people resist defaulting to obedience, especially when they know they shouldn’t.
Using data from two panel studies on U.S. firms and an online experiment, we examine investor reactions to increases in board diversity. Contrary to conventional wisdom, we find that appointing female directors has no impact on objective measures of performance, such as ROA, but does result in a systematic decrease in market value.
(Solal argues that investors may perceive the hiring of women–even competent ones–as a sign that the company is pursuing social justice goals instead of money-making goals and dump the stock.)
Additionally, diverse companies may find it difficult to work together toward a common goal–there is a good quantity of evidence that increasing diversity decreases trust and inhibits group cohesion. E.g., from “The Downside of Diversity”:
IT HAS BECOME increasingly popular to speak of racial and ethnic diversity as a civic strength. From multicultural festivals to pronouncements from political leaders, the message is the same: our differences make us stronger.
But a massive new study, based on detailed interviews of nearly 30,000 people across America, has concluded just the opposite. Harvard political scientist Robert Putnam — famous for “Bowling Alone,” his 2000 book on declining civic engagement — has found that the greater the diversity in a community, the fewer people vote and the less they volunteer, the less they give to charity and work on community projects. In the most diverse communities, neighbors trust one another about half as much as they do in the most homogenous settings. The study, the largest ever on civic engagement in America, found that virtually all measures of civic health are lower in more diverse settings.
As usual, I suspect there is an optimum level of diversity–depending on a group’s purpose and its members’ preferences–that helps minimize groupthink while still preserving most of the benefits of cohesion.