A theory of male and female Sociopathy, pt 2

Note: this is just a theory, developed in reaction to recent conversations. 

As we were discussing Friday, one form of female sociopathy (at least, the form relevant to this conversation) likely involves manipulating or coercing others into providing resources for her children.

There are a couple of obvious tropes:

  1. The evil stepmother, who shunts resources away from a man’s first child, toward his later children. 
  2. The cuckoldress, who tricks or convinces a man to care for another man’s children. (This is not always seen as evil: since men cannot give birth, the male drive to provide for children is triggered at least partly by proximity, and men thus feel genuine affection for children who happen to be around them.)
  3. The crazy ex, who sues a man for all he is worth, doing her best to prevent him from being able to provide for any future children. 

How crazy are women? 

[Graph: NSDUH 2012, Any Mental Illness (AMI) among US adults]

22% of women–slightly more than 1 in 5–have been diagnosed with a mental illness, at least according to all of the data I’ve seen. Since mental illness peaks during the childbearing ages and falls off quickly after menopause, we can also assume that this rate is closer to 1 in 4 during these years.

(The dramatic problems our Native American communities are facing are a separate matter, deserving of their own post.)

The odd thing about this data is that mental illness rates are higher for women than men, despite the fact that mental retardation and mental disability rates are higher for men than women. Men are more likely than women to have serious conditions like non-verbal autism and schizophrenia, and more likely to be homeless or commit suicide. When things go terribly wrong, the sufferers are disproportionately male (an unfortunate side effect of the Y chromosome, which produces greater male than female variability on a variety of traits.)

So why on earth do more women than men suffer from mental illness? 

Perhaps some forms of mental illness confer some unexpected benefits on women. 

Many (perhaps most) “mental illnesses” correlate with a single personality trait: neuroticism.

“Previously we thought that mental illnesses such as depression, schizophrenia, bipolar disorder, and substance abuse, were completely separate diseases,” Ystrøm says.

But research has now shown that these illnesses are often linked. If you suffer from one mental illness, you are more likely to develop another. And if someone in your immediate family has a psychiatric illness, your risk increases not only for this disorder, but for all other disorders.

These findings have led researchers to suspect that there could be a common underlying factor that increases an individual’s risk of mental illness, overall. … 

Ystrøm and colleagues have used new statistical methods to look for patterns in personality, mental disorders, genes, and environmental factors, among the twins in the Twin Register. 

And the answer to the question the researchers asked is: yes, neuroticism seems to be the personality trait that best describes the risk of all mental disorders. …

“This one trait doesn’t explain everything. Anyone can develop a mental illness…”

And in women, neuroticism correlates with… more surviving offspring (in at least one study)

Taking an evolutionary approach, we use data from a contemporary polygynous high-fertility human population living in rural Senegal to investigate whether personality dimensions are associated with key life-history traits in humans, i.e., quantity and quality of offspring. We show that personality dimensions predict reproductive success differently in men and women in such societies and, in women, are associated with a trade-off between offspring quantity and quality. In women, neuroticism positively predicts the number of children, both between and within polygynous families. Furthermore, within the low social class, offspring quality (i.e., child nutritional status) decreases with a woman’s neuroticism, indicating a reproductive trade-off between offspring quantity and quality. 

What is neuroticism, in the Big 5 Personality Traits* sense? 

*Note: I am not endorsing or denying all five traits one way or another.

It’s worrying. Mothers who worry more about their offspring have more offspring, though it’s quite easy to imagine that the causality points in the opposite direction from the one the study’s authors conclude: poor women with lots of skinny babies have more reason to worry about their children than women with a few fat babies.

When are women most likely to experience mental illness?

Immediately after the birth of a child. It’s called post-partum depression, and it can be very bad–one woman in my moms’ group ended up in the mental hospital after developing post-partum psychosis. Andrea Yates famously drowned her five children during a bout of post-partum depression/psychosis.

Why on earth would women develop a debilitating mental illness at the most vulnerable time in their offspring’s life? Wouldn’t natural selection select rather quickly against anything that makes women worse at taking care of their offspring? 

Let’s turn to everyone’s favorite genetic disease, sickle cell anemia. SCA is famous for being a relatively simple genetic mutation of the sort where, if you have one copy of the sickle cell gene, you are less likely to get malaria, and if you have two copies, you tend to die. Since two carrier parents can expect about a quarter of their children to inherit two copies, in areas where malaria is common, the cost of having a quarter of your children die from SCA is lower than the cost of losing them to malaria.
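If you want to see how that arithmetic works out, here is a minimal sketch in Python. The mortality and protection figures are invented purely for illustration; only the Mendelian 1/4 : 1/2 : 1/4 ratios are real.

```python
# A toy model of the sickle-cell trade-off described above.
# All mortality figures are invented for illustration, not measured rates.

def expected_survival(malaria_mortality, carrier_protection=0.9):
    """Expected fraction of surviving children for two carrier (AS)
    parents vs. two non-carrier (AA) parents.

    AS x AS offspring: 1/4 SS (assumed fatal), 1/2 AS (protected
    carriers), 1/4 AA (unprotected).
    """
    m = malaria_mortality
    carriers = 0.25 * 0.0 + 0.50 * (1 - m * (1 - carrier_protection)) + 0.25 * (1 - m)
    non_carriers = 1.0 - m
    return carriers, non_carriers

for m in (0.0, 0.2, 0.5, 0.8):
    c, n = expected_survival(m)
    print(f"malaria mortality {m:.0%}: AS x AS parents {c:.2f}, AA x AA parents {n:.2f}")
```

Under these toy numbers, the sickle cell allele is a net loss until malaria kills roughly a third of unprotected children; past that point, carrier couples raise more surviving children. That is the balancing-selection logic in miniature.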

Personality traits, including neuroticism, generally exist on a continuum. People may become more neurotic when life warrants it, and less neurotic when they don’t need to worry. A mother with a new baby is in a very vulnerable state–she has just lost a good deal of blood, may not be able to walk, and has an infant to care for every other hour, day and night. It is not a normal state by any measure. It is a time when being extra attentive and extra aware of threats and predators is in a woman’s interest.

It is also a time when women are most in need of help from their mates, relatives, or other friends, and increased neuroticism may prompt others to attend more closely to the new mother, helping her out. Increased neuroticism may be so helpful during this period that a few women getting far too much of it, becoming extremely depressed or even killing their children, is a cost outweighed by the increased survival of babies whose mothers had moderate levels of neuroticism.

Let us note that nature doesn’t care about your feelings. Male praying mantises who allow themselves to be eaten by their mates have more offspring than the ones who don’t, but that doesn’t mean male praying mantises enjoy getting eaten. Children who die of sickle cell anemia don’t much appreciate that their siblings were protected from malaria, either.

An increase in neuroticism immediately after the birth of a baby may prompt a mother to take better care of it, but that doesn’t mean she enjoys the neuroticism. Neither does it mean that post-partum depression is healthy, any more than sickle cell anemia is healthy just because it’s a side effect of a trait that helps people avoid malaria. 

But wait, I have more studies!

Reproductive Fitness and Genetic Risk of Psychiatric Disorders in the General Population

The persistence of common, heritable psychiatric disorders that reduce reproductive fitness is an evolutionary paradox. Here, we investigate the selection pressures on sequence variants that predispose to schizophrenia, autism, bipolar disorder, major depression and attention deficit hyperactivity disorder (ADHD) using genomic data from 150,656 Icelanders, excluding those diagnosed with these psychiatric diseases. … Higher polygenic risk of autism is associated with fewer children and older age at first child whereas higher polygenic risk of ADHD is associated with having more children. We find no evidence for a selective advantage of a high polygenic risk of schizophrenia or bipolar disorder. Rare copy-number variants conferring moderate to high risk of psychiatric illness are associated with having fewer children and are under stronger negative selection pressure than common sequence variants. …

In summary, our results show that common sequence variants conferring risk of autism and ADHD are currently under weak selection in the general population of Iceland. However, rare CNVs that also impact cognition are under stronger selection pressure, consistent with mutation-selection balance. The hypothesis that a selective advantage accounts for the prevalence of sequence variants conferring risk of schizophrenia and bipolar disorder is unproven, but rather this empirical evidence suggests that common sequence variants largely escape selection as their individual effect sizes are weak.

Unfortunately, this study mostly looks at the data in aggregate, instead of breaking it down by males and females. (And I don’t know why they would bother excluding people who actually have the conditions they are trying to study, but perhaps it doesn’t make much difference.) 

Thankfully, they did break down the data by male/female in the tables–Table 1 and Table 2. These tables are confusing, but the takeaway is that mental illness has a bigger effect on male fertility than female fertility. 

Also: Fecundity of Patients with Schizophrenia, Autism, Bipolar Disorder, Depression, Anorexia Nervosa, or Substance Abuse vs. their Unaffected Siblings

Results Except for women with depression, affected patients had significantly fewer children (FR range for those with psychiatric disorder, 0.23-0.93; P < 10−10). This reduction was consistently greater among men than women, suggesting that male fitness was particularly sensitive. Although sisters of patients with schizophrenia and bipolar disorder had increased fecundity (FR range, 1.02-1.03; P < .01), this was too small on its own to counterbalance the reduced fitness of affected patients. Brothers of patients with schizophrenia and autism showed reduced fecundity (FR range, 0.94-0.97; P < .001). Siblings of patients with depression and substance abuse had significantly increased fecundity (FR range, 1.01-1.05; P < 10−10). In the case of depression, this more than compensated for the lower fecundity of affected individuals.

Conclusions Our results suggest that strong selection exists against schizophrenia, autism, and anorexia nervosa and that these variants may be maintained by new mutations or an as-yet unknown mechanism. Bipolar disorder did not seem to be under strong negative selection. Vulnerability to depression, and perhaps substance abuse, may be preserved by balancing selection, suggesting the involvement of common genetic variants in ways that depend on other genes and on environment.

Now, this study gets interesting in its graphs: 

[Figure 1]
From Fecundity of Patients with Schizophrenia, Autism, Bipolar Disorder, Depression, Anorexia Nervosa, or Substance Abuse vs their Unaffected Siblings

In every case, mental illness has a bigger effect on male fertility than female–and in the case of depression, it has no effect on female fertility. 

But wait: 

[Figure 2]
Same source.

This graph is confusingly labeled, but it breaks down fecundity for the brothers and sisters of people with mental disorders. So the first dot represents the brothers of people with schizophrenia; the second dot represents the sisters of people with schizophrenia.

None of these effects are huge, and some of them changed when “comorbidities were included in the analysis,” though it’s not clear exactly what that means–the word comorbidity in this context refers to people with more than one diagnosis. 

For the objectives of this study, we first analyzed each disorder separately without accounting for comorbidities. A secondary analysis was then performed that corrected for comorbidities by analyzing all disorders simultaneously.

So when you analyze all of the disorders together, sisters of schizophrenics had no increased fertility, and neither did the siblings of people with bipolar. Depressed men had average fertility, while depressed women actually had slightly above average fertility. The results for anorexia, substance abuse, and autism didn’t change. 

And from Spain: Seven Dimensions of Personality Pathology are Under Sexual Selection in Modern Spain

Personality variation is increasingly thought to have an adaptive function. This is less clear for personality disorders (PDs)—extreme variants of personality that cause harm in most aspects of life. However, the possibility that PDs may be maintained in the population because of their advantages for fitness has been not convincingly tested. In a sample of 959 outpatients, we examined whether, and how, sexual selection acts on the seven main dimensions of personality pathology, taking into account mating success, reproductive success, and the mediating role of status. We find that, to varying extents, all personality dimensions are under sexual selection. Far from being predominantly purifying, selective forces push traits in diverging, often pathological, directions. These pressures differ moderately between the sexes. Sexual selection largely acts in males through the acquisition of wealth, and through the duration (rather than the number) of mates. This gives a reproductive advantage to males high in persistence–compulsivity. Conversely, because of the decoupling between the number of mates and offspring, the promiscuous strategy of psychopaths is not so successful. Negative emotionality, the most clinically detrimental trait, is slightly deleterious in males but is positively selected in females, which can help to preserve variation. 

It’s interesting that the invention of birth control may have inadvertently selected against promiscuous psychopaths–rather similar to the theory that abortion is responsible for the decrease in crime since the early 90s. 

“Negative emotionality” is likely equivalent to “neuroticism.”

There are two obvious reasons why mental illness might have more of an effect on males than females–one is that mental illness might simply be more severe for males than females, on average. The second is that mental illness interferes more with holding down a job than with being a housewife, so women with mental illnesses have more options than men.

Less obvious, though, is that some of these traits might actually be beneficial–in small quantities–for women.

That’s enough for now; let’s continue this discussion on Friday. (Wednesday is book club.) 


A theory of male and female Sociopathy pt 1

Note: this is just a theory, developed in reaction to recent conversations. 

Let us assume, first of all, that men and women have different optimal reproductive strategies, based on their different anatomy. In case you have not experienced birth yourself, it’s a difference of calories, time, and potential death. 

In the ancestral environment (before child support laws, abortion, birth control, or infant formula):

For men, the absolute minimal paternal investment in a child–immediate abandonment–involves a few minutes of effort and a spoonful of semen. There are few dangers involved, except for the possibility of other males competing for the same female. A hypothetical man could, with very little strain or extra physical effort, father thousands of children–gay men regularly go through the physical motions of doing just that, and hardly seem exhausted by the effort.

For women, the absolute minimal parental investment is nine months of gestation followed by childbirth. This is calorically expensive, interferes with the mother’s ability to take care of her other children, and could kill her. A woman who tried to maximize her pregnancies from menarche to menopause might produce 25 children.

If a man abandons his children, there is a decent chance they will still survive, because they can be nursed by their mother; if a woman abandons her child, it is likely to die, because its father cannot lactate and so cannot feed it. 

In sum, for men, random procreative acts (ie, sex) are extremely low-cost and still have the potential to produce offspring. For women, random procreative acts are extremely costly. So men have an incentive to spread their sperm around and women have an incentive to be picky about when and with whom they reproduce.  

This is well known to, well, everyone. 

Now, obviously most men do not abandon their children (nor do most women.) It isn’t in their interest to do so. A man’s children are more likely to survive and do well in life if he invests in them. (In a few societies where paternity is really uncertain, men invest resources in their sisters’ children, who are at least related to them, rather than opting out altogether.) As far as I know, some amount of male input into their children or their sisters’ children is a human universal–the only variation is in how much. 

Men want to invest in their children because this helps their children succeed, but a few un-tended bastards here and there are not a major problem. Some of them might even survive. 

By contrast, women really don’t want to get saddled with bastards. 

We may define sociopathy, informally, as attempting to achieve evolutionary ends by means that harm others in society, eg, stealing. In this case, rape and child abandonment are sociopathic ways of increasing men’s reproductive success at the expense of other people. (Note that sociopathy doesn’t have a formal definition and I am using it here as a tool, not a real diagnosis. If someone has a better term, I’m happy to use it.)

This is, again, quite obvious–everyone knows that men are much more likely than women to be imprisoned for violent acts, rape included. Men are also more likely than women to try to skip out on their child support payments. 

Note that this “sociopathy” is not necessarily a mental illness (a true illness ought to make a dent in one’s evolutionary success.) Genghis Khan raped a lot of women, and it turned out great for his genes. It is simply a reproductive strategy that harms other people.

So what does female sociopathy look like? 

It can’t look like male sociopathy, because child abandonment decreases a woman’s fertility. For a woman, violence and abandonment would be signs of true mental defects. Rather, we want to look at ways women improve their chances of reproductive success at the expense of others. 

In other words, female sociopathy involves manipulating or coercing others into providing resources for her children. 

But it’s getting late; let’s continue with part 2 on Monday. (Wednesday is book club.)

Vacation Posting pt. 2: Brain Modules, Fertility, and Conspiracy

Sorry, I’ve been on vacation (and no, I do not like vacations.) This has interfered with my normal writing schedule (it is now past 4 am), but here are the notes I managed to jot down in the car:

2. Brain Modules

This is relevant to my previous post on “The Modular Mind,” in which I proposed that people use a kind of compartmentalized or “modular” thinking to break down the complexity of life into manageable chunks. People can hold two beliefs at once that they think are “logical” but contradict each other because each belief is sort of “lodged in” a different module.

For example, Module 1 likes to think about Pensions. Mod 1 knows that pensions are paid for via current workers’ salaries, so we have to have enough future workers to fund future pension obligations.

Module 2 likes to think about the Environment. Mod 2 knows that we only have so many resources and that a growing population will quickly exhaust them, so we must reduce birthrates to save the environment.

Mod 1 then looks around and panics because, Oh no, there aren’t enough young people around to fund the pensions!

Mod 1 doesn’t bother to check in with Mod 2 about why there aren’t enough babies around. It just has some vague idea that people don’t want to make babies for some reason, so it goes and finds some people who do make babies and proposes that we let more of them into the country.

Mod 1: Problem solved!

Mod 2: Oh no, look what all of those new people just did to our carbon footprint! We will have to reduce and conserve even more!

Modular thinking lets people process one problem very effectively, but interferes with seeing connections between those problems. In this case, they don’t see why mods 1 and 2 are working in opposition to each other.

From the outside–to someone who encounters both thoughts at once and so doesn’t process them separately–it makes no sense that someone could advocate both at once. They obviously contradict. Hence, outsiders tend to assume this contradiction is deliberate, caused by conspiracy, malice, or ill-will.

(Don’t worry, this is the last of the vacation posting.)

A Modest Educational Proposal


Fellow humans, we have a problem. (And another problem.)

At least, this looks like a problem to me, especially when I’m trying to make conversation at the local moms group.

There are many potential reasons the data looks like this (including inaccuracy, though my lived experience says it is accurate.) Our culture encourages people to limit their fertility, and smart women are especially so encouraged. Smart people are also better at long-term planning and doing things like “reading the instructions on the birth control.”

But it seems likely that there is another factor, an arrow of causation pointing in the other direction: smart people tend to stay in school for longer, and people dislike having children while they are still in school. While you are in school, you are in some sense still a child, and we have a notion that children shouldn’t beget children.

Isaac Newton. Never married. Probably a virgin.

People who drop out of school and start having children at 16 tend not to be very smart and also tend to have plenty of children during their child-creating years. People who pursue post-docs into their thirties tend to be very smart–and many of them are virgins.

Now, I don’t know about you, but I kind of like having smart people around, especially the kinds of people who invent refrigerators and make supply chains work so I can enjoy eating food, even though I live in a city, far from any farms. I don’t want to live in a world where IQ is crashing and we can no longer maintain complex technological systems.

We need to completely re-think this system where the smarter you are, the longer you are expected to stay in school, accruing debt and not having children.

Proposal one: Accelerated college for bright students. Let any student who can do college-level work begin earning college credits, even if they are still in high (or middle) school. There are plenty of bright students out there who could be completing their degrees by 18.

The entire framework of schooling probably ought to be sped up in a variety of ways, especially for bright students. The current framework often reflects the order in which various discoveries were made, rather than the age at which students are capable of learning the material. For example, negative numbers are apparently not introduced in the math curriculum until 6th grade, even though, in my experience, even kindergarteners are perfectly capable of understanding the concept of “debt.” If I promise to give you one apple tomorrow, then I have “negative one apple.” There is no need to hide the concept of negatives for 6 years.

Proposal two: More apprenticeship.

In addition to being costly and time-consuming, a college degree doesn’t even guarantee that your chosen field will still be hiring when you graduate. (I know people with STEM degrees who graduated right as the dot-com bubble burst. Ouch.) We essentially want our educational system to turn out people who are highly skilled at highly specialized trades, and capable of turning around and becoming highly skilled at another highly specialized trade on a dime if that doesn’t work out. This leads to chemists returning to university for law degrees and physicists going back for medical degrees. We want students to have both “broad educations” so they can get hired anywhere, and “deep educations” so they’ll actually be good at their jobs.

Imagine, instead, a system where highschool students are allowed to take a two-year course in preparation for a particular field, at the end of which high performers are accepted into an apprenticeship program where they continue learning on the job. At worst, these students would have a degree, income, and job experience by the age of 20, even if they decided they now wanted to switch professions or pursue an independent education.

Proposal three: Make childbearing normal for adult students.

There’s no reason college students can’t get married and have children (aside from, obviously, their lack of jobs and income.) College is not more time consuming or physically taxing than regular jobs, and college campuses tend to be pretty pleasant places. Studying while pregnant isn’t any more difficult than working while pregnant.

Grad students, in particular, are old and mature enough to get married and start families, and society should encourage them to do so.

Proposal four: stop denigrating child-rearing, especially for intelligent women.

Children are a lot of work, but they’re also fun. I love being with my kids. They are my family and an endless source of happiness.

What people want and value, they will generally strive to obtain.

 

These are just some ideas. What are yours?

Are “Nerds” Just a Hollywood Stereotype?

Yes, MIT has a football team.

The other day on Twitter, Nick B. Steves challenged me to find data supporting or refuting his assertion that Nerds vs. Jocks is a false stereotype, invented around 1975. Of course, we HBDers have a saying–“all stereotypes are true,” even the ones about us–but let’s investigate Nick’s claim and see where it leads us.

(NOTE: If you have relevant data, I’d love to see it.)

Unfortunately, terms like “nerd,” “jock,” and “chad” are not all that well defined. Certainly if we define “jock” as “athletic but not smart” and nerd as “smart but not athletic,” then these are clearly separate categories. But what if there’s a much bigger group of people who are smart and athletic?

Or what if we are defining “nerd” and “jock” too narrowly? Wikipedia defines nerd as, “a person seen as overly intellectual, obsessive, or lacking social skills.” I recall a study–which I cannot find right now–which found that nerds had, overall, lower-than-average IQs, but that study included people who were obsessive about things like comic books, not just people who majored in STEM. Similarly, should we define “jock” only as people who are good at sports, or do passionate sports fans count?

For the sake of this post, I will define “nerd” as “people with high math/science abilities” and “jock” as “people with high athletic abilities,” leaving the matter of social skills undefined. (People who merely like video games or watch sports, therefore, do not count.)

Nick is correct on one count: according to Wikipedia, although the word “nerd” has been around since 1951, it was popularized during the 70s by the sitcom Happy Days. However, Wikipedia also notes that:

An alternate spelling,[10] as nurd or gnurd, also began to appear in the mid-1960s or early 1970s.[11] Author Philip K. Dick claimed to have coined the nurd spelling in 1973, but its first recorded use appeared in a 1965 student publication at Rensselaer Polytechnic Institute.[12][13] Oral tradition there holds that the word is derived from knurd (drunk spelled backward), which was used to describe people who studied rather than partied. The term gnurd (spelled with the “g”) was in use at the Massachusetts Institute of Technology by 1965.[14] The term nurd was also in use at the Massachusetts Institute of Technology as early as 1971 but was used in the context for the proper name of a fictional character in a satirical “news” article.[15]

suggesting that the word was already common among nerds themselves before it was picked up by TV.

But we can trace the nerd-jock dichotomy back before the terms were coined: back in 1921, Lewis Terman, a researcher at Stanford University, began a long-term study of exceptionally high-IQ children, the Genetic Studies of Genius aka the Terman Study of the Gifted:

Terman’s goal was to disprove the then-current belief that gifted children were sickly, socially inept, and not well-rounded.

This belief was especially popular in a little nation known as Germany, where it inspired both long hikes in the woods to keep schoolchildren fit and the mass extermination of Jews, who were believed to be muddying the German genepool with their weak, sickly, high-IQ genes (and nefariously trying to marry strong, healthy Germans in order to replenish their own defective stock.) It didn’t help that German Jews were both high-IQ and beset by a number of illnesses (probably related to high rates of consanguinity,) but then again, the Gypsies are beset by even more debilitating illnesses, and no one blames those on all of the fresh air and exercise afforded by their highly mobile lifestyles.

(Just to be thorough, though, the Nazis also exterminated the Gypsies and Hans Asperger’s subjects, despite Asperger’s insistence that they were very clever children who could probably be of great use to the German war effort via code breaking and the like.)

The results of Terman’s study are strongly in Nick’s favor. According to Psychology Today’s  account:

His final group of “Termites” averaged a whopping IQ of 151. Following-up his group 35-years later, his gifted group at mid-life definitely seemed to conform to his expectations. They were taller, healthier, physically better developed, and socially adept (dispelling the myth at the time of high-IQ awkward nerds).

According to Wikipedia:

…the first volume of the study reported data on the children’s family,[17] educational progress,[18] special abilities,[19] interests,[20] play,[21] and personality.[22] He also examined the children’s racial and ethnic heritage.[23] Terman was a proponent of eugenics, although not as radical as many of his contemporary social Darwinists, and believed that intelligence testing could be used as a positive tool to shape society.[3]

Based on data collected in 1921–22, Terman concluded that gifted children suffered no more health problems than normal for their age, save a little more myopia than average. He also found that the children were usually social, were well-adjusted, did better in school, and were even taller than average.[24] A follow-up performed in 1923–1924 found that the children had maintained their high IQs and were still above average overall as a group.

Of course, we can go back even further than Terman–in the early 1800s, allergies like hay fever were associated with the nobility, who of course did not do much vigorous work in the fields.

My impression, based on studies I’ve seen previously, is that athleticism and IQ are positively correlated. That is, smarter people tend to be more athletic, and more athletic people tend to be smarter. There’s a very obvious reason for this: our brains are part of our bodies, people with healthier bodies therefore also have healthier brains, and healthier brains tend to work better.

At the very bottom of the IQ distribution, mentally retarded people tend to also be clumsy, flaccid, or lacking in good muscle tone. The same genes (or environmental conditions) that give children terrible health and developmental problems often also affect their brain growth, and conditions that affect their brains also affect their bodies. As we progress from low to average to above-average IQ, we encounter increasingly healthy people.

In most smart people, high IQ doesn’t seem to be a random fluke, a genetic error, or fitness-reducing: in a genetic study of children with exceptionally high IQs, researchers failed to find many genes that specifically endowed the children with genius, but found instead a fortuitous absence of deleterious genes that knock a few points off the rest of us. The same genes that have a negative effect on the nerves and proteins in your brain probably also have a deleterious effect on the nerves and proteins throughout the rest of your body.

And indeed, there are many studies which show a correlation between intelligence and strength (eg, Longitudinal and Cross-Sectional Assessments of Age Changes in Physical Strength as Related to Sex, Social Class, and Mental Ability) or intelligence and overall health/not dying (eg, Intelligence in young adulthood and cause-specific mortality in the Danish Conscription Database (pdf) and The effects of occupation-based social position on mortality in a large American cohort.)

On the other hand, the evolutionary standard for “fitness” isn’t strength or longevity, but reproduction, and on this scale the high-IQ don’t seem to do as well:

Smart teens don’t have sex (or kiss much either): (h/t Gene Expression)

Controlling for age, physical maturity, and mother’s education, a significant curvilinear relationship between intelligence and coital status was demonstrated; adolescents at the upper and lower ends of the intelligence distribution were less likely to have sex. Higher intelligence was also associated with postponement of the initiation of the full range of partnered sexual activities. … Higher intelligence operates as a protective factor against early sexual activity during adolescence, and lower intelligence, to a point, is a risk factor.

[Graph: adolescent sexual activity by IQ]

Here we see the issue plainly: males at 120 and 130 IQ are less likely to get laid than clinically retarded men with IQs in the 60s and 70s. The right side of the graph is the “nerds”; the left side, the “jocks.” Of course, the high-IQ females are even less likely to get laid than the high-IQ males, but males tend to judge themselves against other men, not women, when it comes to dating success. Since the low-IQ females are much less likely to get laid than the low-IQ males, this implies that most of these “popular” guys are dating girls who are smarter than themselves–a fact not lost on the nerds, who would also like to date those girls.

In 2001, the MIT/Wellesley magazine Counterpoint (Wellesley is MIT’s “sister school” and the two campuses allow cross-enrollment in each other’s courses) published a sex survey that provides a more detailed picture of nerd virginity:

I’m guessing that computer scientists invented polyamory, and neuroscientists are the chads of STEM. The results are otherwise pretty predictable.

Unfortunately, Counterpoint appears to be defunct due to lack of funding/interest and I can no longer find the original survey, but here is Jason Malloy’s summary from Gene Expression:

By the age of 19, 80% of US males and 75% of women have lost their virginity, and 87% of college students have had sex. But this number appears to be much lower at elite (i.e. more intelligent) colleges. According to the article, only 56% of Princeton undergraduates have had intercourse. At Harvard 59% of the undergraduates are non-virgins, and at MIT, only a slight majority, 51%, have had intercourse. Further, only 65% of MIT graduate students have had sex.

The student surveys at MIT and Wellesley also compared virginity by academic major. The chart for Wellesley displayed below shows that 0% of studio art majors were virgins, but 72% of biology majors were virgins, and 83% of biochem and math majors were virgins! Similarly, at MIT 20% of ‘humanities’ majors were virgins, but 73% of biology majors. (Apparently those most likely to read Darwin are also the least Darwinian!)

College Confidential has one paragraph from the study:

How Rolling Stone-ish are the few lucky souls who are doing the horizontal mambo? Well, not very. Considering all the non-virgins on campus, 41% of Wellesley and 32% of MIT students have only had one partner (figure 5). It seems that many Wellesley and MIT students are comfortingly monogamous. Only 9% of those who have gotten it on at MIT have been with more than 10 people and the number is 7% at Wellesley.

Someone needs to find the original study and PUT IT BACK ON THE INTERNET.

But this lack of early sexual success seems to translate into long-term marital happiness, once nerds find “the one.” Lex Fridman’s Divorce Rates by Profession offers a thorough list. The average divorce rate was 16.35%, with a high of 43% (Dancers) and a low of 0% (“Media and communication equipment workers.”)

I’m not sure exactly what all of these jobs are nor exactly which ones should count as STEM (veterinarian? anthropologists?) nor do I know how many people are employed in each field, but I count 49 STEM professions that have lower than average divorce rates (including computer scientists, economists, mathematical science, statisticians, engineers, biologists, chemists, aerospace engineers, astronomers and physicists, physicians, and nuclear engineers,) and only 23 with higher than average divorce rates (including electricians, water treatment plant operators, radio and telecommunication installers, broadcast engineers, and similar professions.) The purer sciences obviously had lower rates than the more practical applied tech fields.

The big outliers were mathematicians (19.15%), psychologists (19.26%), and sociologists (23.53%), though I’m not sure they count (if so, there were only 22 professions with higher than average divorce rates.)

I’m not sure which professions count as “jock” or “chad,” but athletes had lower than average rates of divorce (14.05%), as did firefighters, soldiers, and farmers. Financial examiners, hunters, and dancers (presumably an athletic female occupation), however, had very high rates of divorce.

Medical Daily has an article on Who is Most Likely to Cheat? The Top 9 Jobs Unfaithful People Have (according to a survey):

According to the survey recently taken by the “infidelity dating website,” Victoria Milan, individuals working in the finance field, such as brokers, bankers, and analysts, are more likely to cheat than those in any other profession. However, following those in finance comes those in the aviation field, healthcare, business, and sports.

With the exception of healthcare and maybe aviation, these are pretty typical Chad occupations, not STEM.

The Mirror has a similar list of jobs where people are most and least likely to be married. Most likely: Dentist, Chief Executive, Sales Engineer, Physician, Podiatrist, Optometrist, Farm product buyer, Precision grinder, Religious worker, Tool and die maker.

Least likely: Paper-hanger, Drilling machine operator, Knitter textile operator, Forge operator, Mail handler, Science technician, Practical nurse, Social welfare clerk, Winding machine operative, Postal clerk.

I struggled to find data on male fertility by profession/education/IQ, but there’s plenty on female fertility, eg, the deceptively titled High-Fliers have more Babies:

…American women without any form of high-school diploma have a fertility rate of 2.24 children. Among women with a high-school diploma the fertility rate falls to 2.09 and for women with some form of college education it drops to 1.78.

However, among women with college degrees, the economists found the fertility rate rises to 1.88 and among women with advanced degrees to 1.96. In 1980 women who had studied for 16 years or more had a fertility rate of just 1.2.

As the economists prosaically explain: “The relationship between fertility and women’s education in the US has recently become U-shaped.”

Here is another article about the difference in fertility rates between high and low-IQ women.

But female fertility and male fertility may not be the same–I recall data elsewhere indicating that high-IQ men have more children than low IQ men, which implies those men are having their children with low-IQ women. (For example, while Bill and Hillary seem about matched on IQ, and have only one child, Melania Trump does not seem as intelligent as Trump, who has five children.)

Amusingly, I did find data on fertility rate by father’s profession for 1920, in the Birth Statistics for the Birth Registration Area of the US:

Of the 1,508,874 children born in 1920 in the birth registration area of the United states, occupations of fathers are stated for … 96.9%… The average number of children ever born to the present wives of these occupied fathers is 3.3 and the average number of children living 2.9.

The average number of children ever born ranges from 4.6 for foremen, overseers, and inspectors engaged in the extraction of minerals to 1.8 for soldiers, sailors, and marines. Both of these extreme averages are easily explained, for soldier, sailors and marines are usually young, while such foremen, overseers, and inspectors are usually in middle life. For many occupations, however, the ages of the fathers are presumably about the same and differences shown indicate real differences in the size of families. For example, the low figure for dentists, (2), architects, (2.1), and artists, sculptors, and teachers of art (2.2) are in striking contrast with the figure for mine operatives (4.3), quarry operatives (4.1) bootblacks, and brick and stone masons (each 3.9). …

As a rule the occupations credited with the highest number of children born are also credited with the highest number of children living, the highest number of children living appearing for foremen, overseers, and inspectors engaged in the extraction of minerals (3.9) and for steam and street railroad foremen and overseer (3.8), while if we exclude groups plainly affected by the age of fathers, the highest number of children living appear for mine and quarry operatives (each 3.6).

Obviously the job market was very different in 1920–no one was majoring in computer science. Perhaps some of those folks who became mine and quarry operatives back then would become engineers today–or perhaps not. Here are the average numbers of surviving children for the most obviously STEM professions (remember average for 1920 was 2.9):

Electricians 2.1, electrotypers 2.2, telegraph operators 2.2, actors 1.9, chemists 1.8, inventors 1.8, photographers and physicians 2.1, technical engineers 1.9, veterinarians 2.2.

I don’t know what paper hangers do, but the Mirror said they were among the least likely to be married, and in 1920, they had an average of 3.1 children–above average.

What about athletes? How smart are they?

Athletes Show Huge Gaps on SAT Scores” is not a promising title for the “nerds are athletic” crew.

The Journal-Constitution studied 54 public universities, “including the members of the six major Bowl Championship Series conferences and other schools whose teams finished the 2007-08 season ranked among the football or men’s basketball top 25.”…

  • Football players average 220 points lower on the SAT than their classmates. Men’s basketball was 227 points lower.
  • University of Florida won the prize for biggest gap between football players and the student body, with players scoring 346 points lower than their peers.
  • Georgia Tech had the nation’s best average SAT score for football players, 1028 of a possible 1600, and best average high school GPA, 3.39 of a possible 4.0. But because its student body is apparently very smart, Tech’s football players still scored 315 SAT points lower than their classmates.
  • UCLA, which has won more NCAA championships in all sports than any other school, had the biggest gap between the average SAT scores of athletes in all sports and its overall student body, at 247 points.

From the original article, which no longer seems to be up on the Journal-Constitution website:

All 53 schools for which football SAT scores were available had at least an 88-point gap between team members’ average score and the average for the student body. …

Football players performed 115 points worse on the SAT than male athletes in other sports.

The differences between athletes’ and non-athletes’ SAT scores were less than half as big for women (73 points) as for men (170).

Many schools routinely used a special admissions process to admit athletes who did not meet the normal entrance requirements. … At Georgia, for instance, 73.5 percent of athletes were special admits compared with 6.6 percent of the student body as a whole.

On the other hand, as Discover Magazine discusses in “The Brain: Why Athletes are Geniuses,” athletic tasks–like catching a fly ball or slapping a hockey puck–require exceptionally fast and accurate brain signals to trigger the correct muscle movements.

Ryan Stegal studied the GPAs of highschool student athletes vs. non-athletes and found that the athletes had higher average GPAs than the non-athletes, but he also notes that the athletes were required to meet certain minimum GPA requirements in order to play.

But within athletics, it looks like the smarter athletes perform better than dumber ones, which is why the NFL uses the Wonderlic Intelligence Test:

NFL draft picks have taken the Wonderlic test for years because team owners need to know if their million dollar player has the cognitive skills to be a star on the field.

What does the NFL know about hiring that most companies don’t? They know that regardless of the position, proof of intelligence plays a profound role in the success of every individual on the team. It’s not enough to have physical ability. The coaches understand that players have to be smart and think quickly to succeed on the field, and the closer they are to the ball the smarter they need to be. That’s why, every potential draft pick takes the Wonderlic Personnel Test at the combine to prove he does–or doesn’t–have the brains to win the game. …

The first use of the WPT in the NFL was by Tom Landry of the Dallas Cowboys in the early 70s, who took a scientific approach to finding players. He believed players who could use their minds where it counted had a strategic advantage over the other teams. He was right, and the test has been used at the combine ever since.

For the NFL, years of testing shows that the higher a player scores on the Wonderlic, the more likely he is to be in the starting lineup—for any position. “There is no other reasonable explanation for the difference in test scores between starting players and those that sit on the bench,” Callans says. “Intelligence plays a role in how well they play the game.”

Let’s look at Exercising Intelligence: How Research Shows a Link Between Physical Activity and Smarts:

A large study conducted at the Sahlgrenska Academy and Sahlgrenska University Hospital in Gothenburg, Sweden, reveals that young adults who regularly exercise have higher IQ scores and are more likely to go on to university.

The study was published in the Proceedings of the National Academy of Sciences (PNAS), and involved more than 1.2 million Swedish men. The men were performing military service and were born between the years 1950 and 1976. Both their physical and IQ test scores were reviewed by the research team. …

The researchers also looked at data for twins and determined that primarily environmental factors are responsible for the association between IQ and fitness, and not genetic makeup. “We have also shown that those youngsters who improve their physical fitness between the ages of 15 and 18 increase their cognitive performance.”…

I have seen similar studies before, some involving mice and some, IIRC, the elderly. It appears that exercise is probably good for you.

I have a few more studies I’d like to mention quickly before moving on to discussion.

Here’s Grip Strength and Physical Demand of Previous Occupation in a Well-Functioning Cohort of Chinese Older Adults (h/t prius_1995), which found that participants who had previously worked in construction had greater grip strength than former office workers.

Age and Gender-Specific Normative Data of Grip and Pinch Strength in a Healthy Adult Swiss Population (h/t prius_1995).

 

If the nerds are in the sedentary cohort, then they may be just as athletic as, if not more athletic than, all of the other cohorts except the heavy-work one.

However, in Revised normative values for grip strength with the Jamar dynamometer, the authors found no effect of profession on grip strength.

And Isometric muscle strength and anthropometric characteristics of a Chinese sample (h/t prius_1995).

And Pumpkin Person has an interesting post about brain size vs. body size.

 

Discussion: Are nerds real?

Overall, it looks like smarter people are more athletic, more athletic people are smarter, smarter athletes are better athletes, and exercise may make you smarter. For most people, the nerd/jock dichotomy is wrong.

However, there is very little overlap at the very highest end of the athletic and intelligence curves–most college (and thus professional) athletes are less intelligent than the average college student, and most college students are less athletic than the average college (and professional) athlete.

Additionally, while people with STEM degrees make excellent spouses (except for mathematicians, apparently,) their reproductive success is below average: they have sex later than their peers and, as far as the data I’ve been able to find shows, have fewer children.

Stephen Hawking

Even if there is a large overlap between smart people and athletes, they are still separate categories selecting for different things: a cripple can still be a genius, but can’t play football; a dumb person can play sports, but not do well at math. Stephen Hawking can barely move, but he’s still one of the smartest people in the world. So the set of all smart people will always include more “stereotypical nerds” than the set of all athletes, and the set of all athletes will always include more “stereotypical jocks” than the set of all smart people.

In my experience, nerds aren’t socially awkward (aside from their shyness around women.) The myth that they are stems from the fact that they have different interests and communicate in a different way than non-nerds. Let nerds talk to other nerds, and they are perfectly normal, communicative, socially functional people. Put them in a room full of non-nerds, and suddenly the nerds are “awkward.”

Unfortunately, the vast majority of people are not nerds, so many nerds have to spend the majority of their time in the company of lots of people who are very different than themselves. By contrast, very few people of normal IQ and interests ever have to spend time surrounded by the very small population of nerds. If you did put them in a room full of nerds, however, you’d find that suddenly they don’t fit in. The perception that nerds are socially awkward is therefore just normie bias.

Why did the nerd/jock dichotomy become so popular in the 70s? Probably in part because science and technology were really taking off as fields normal people could aspire to major in: man had just landed on the moon, and the Intel 4004 was released in 1971. Very few people went to college or were employed in the sciences back in 1920; by 1970, colleges were everywhere and science was booming.

And at the same time, colleges and highschools were ramping up their athletics programs. I’d wager that the average school in the 1800s had neither PE nor athletics of any sort. To find those, you’d probably have to attend private academies like Andover or Exeter. By the 70s, though, schools were taking their athletics programs–even athletic recruitment–seriously.

How strong you felt the dichotomy probably depends on the nature of your school. I have attended schools where all of the students were fairly smart and there was no anti-nerd sentiment, and I have attended schools where my classmates were fiercely anti-nerd and made sure I knew it.

But the dichotomy predates the terminology. Take Superman, who first appeared in 1938. His disguise is a pair of glasses, because no one can believe that the bookish, mild-mannered Clark Kent is actually the super-strong Superman. Batman is based on the character of Zorro, created in 1919. Zorro is an effete, weak, foppish nobleman by day and a dashing, sword-fighting hero of the poor by night. Of course these characters are both smart and athletic, but their disguises only work because others do not expect them to be. As fantasies, the characters are powerful because they provide a vehicle for our own desires: for our everyday, normal failings to be just a cover for how secretly amazing we are.

But for the most part, most smart people are perfectly fit, healthy, and coordinated–even the ones who like math.

 

Existential Caprine

Once

You were

Wild.

Sure,

There were predators

The lions could be confusing

But you were free

goat painting, Herculaneum

Then came men

Faster, smarter than lions

They killed the wolves

Brought you food

(The bread of slavery, they say, is far sweeter than the bread of freedom.)

And shelter

Children were born, safe from wolves, hunger, or cold

and you grew used to man.

Centuries passed

And it seemed you outnumbered the stars

Perhaps your sons disappeared

But was it worse than wolves?

You could almost forget you were once wild

Could you return to the mountains, even if you wanted to?

And as they lead you away

You ask

Did I ever have a choice?

 

To explain: The process of domestication is fascinating. Some animals, like wolves, began associating with humans because they could pick up our scraps. Others, like cats, began living in our cities because they liked eating the vermin we attracted. (You might say the mice, too, are domesticated.) These relationships are obviously mutually beneficial (aside from the mice.)

The animals we eat, though, have a different–more existential–story.

Humans increased the number of wild goats and sheep available for them to eat by eliminating competing predators, like wolves and lions. We brought them food in the winter, built them shelters to keep them warm, and led them to the best pastures. As a result, their numbers increased.

But, of course, we eat them.

From the goat’s perspective, is it worth it?

There’s a wonderful metaphor in the Bible, enacted every Passover: matzoh.

If you’ve never had it, matzoh tastes like saltines, only worse. It’s the bread of freedom, hastily thrown on the fire, hastily carried away.

The bread of slavery tastes delicious. The bread of freedom tastes awful.

1 And they took their journey from Elim, and all the congregation of the children of Israel came unto the wilderness of Sin, which is between Elim and Sinai, on the fifteenth day of the second month after their departing out of the land of Egypt. 2 And the whole congregation of the children of Israel murmured against Moses and Aaron in the wilderness: 3 And the children of Israel said unto them, Would to God we had died by the hand of the LORD in the land of Egypt, when we sat by the flesh pots, and when we did eat bread to the full… Exodus 16

Even if the goats didn’t want to be domesticated, hated it and fought against it, did they have any choice? If the domesticated goats have more surviving children than wild ones, then goats will become domesticated. It’s a simple matter of numbers:

Total Fertility Rate by Country: Purple = 7 children per woman; Blue = 1 child per woman

The future belongs to those who show up.

Which future do you choose?

Evolution is slow–until it’s fast: Genetic Load and the Future of Humanity

Source: Priceonomics

A species may live in relative equilibrium with its environment, hardly changing from generation to generation, for millions of years. Turtles, for example, have barely changed since the Cretaceous, when dinosaurs still roamed the Earth.

But if the environment changes–critically, if selective pressures change–then the species will change, too. This was most famously demonstrated with English moths, which changed color from white-and-black speckled to pure black when pollution darkened the trunks of the trees they lived on. To survive, these moths need to avoid being eaten by birds, so any moth that stands out against the tree trunks tends to get turned into an avian snack. Against light-colored trees, dark-colored moths stood out and were eaten. Against dark-colored trees, light-colored moths stood out.

This change did not require millions of years. Dark-colored moths were virtually unknown in 1810, but by 1895, 98% of the moths were black.

The time it takes for evolution to occur depends simply on A. The frequency of a trait in the population and B. How strongly you are selecting for (or against) it.

Let’s break this down a little bit. Within a species, there exists a great deal of genetic variation. Some of this variation happens because two parents with different genes get together and produce offspring with a combination of their genes. Some of this variation happens because of random errors–mutations–that occur during copying of the genetic code. Much of the “natural variation” we see today started as some kind of error that proved to be useful, or at least not harmful. For example, all humans originally had dark skin similar to modern Africans’, but random mutations in some of the folks who no longer lived in Africa gave them lighter skin, eventually producing “white” and “Asian” skin tones.

(These random mutations also happen in Africa, but there they are harmful and so don’t stick around.)

Natural selection can only act on the traits that are actually present in the population. If we tried to select for “ability to shoot x-ray lasers from our eyes,” we wouldn’t get very far, because no one actually has that mutation. By contrast, albinism is rare, but it definitely exists, and if for some reason we wanted to select for it, we certainly could. (The incidence of albinism among the Hopi Indians is high enough–1 in 200 Hopis vs. 1 in 20,000 Europeans generally and 1 in 30,000 Southern Europeans–for scientists to discuss whether the Hopi have been actively selecting for albinism. This still isn’t a lot of albinism, but since the American Southwest is not a good environment for pale skin, it’s something.)

You will have a much easier time selecting for traits that crop up more frequently in your population than traits that crop up rarely (or never).

Second, we have intensity–and variety–of selective pressure. What % of your population is getting removed by natural selection each year? If 50% of your moths get eaten by birds because they’re too light, you’ll get a much faster change than if only 10% of moths get eaten.

Selection doesn’t have to involve getting eaten, though. Perhaps some of your moths are moth Lotharios, seducing all of the moth ladies with their fuzzy antennae. Over time, the moth population will develop fuzzier antennae as these handsome males out-reproduce their less hirsute cousins.

No matter what kind of selection you have, or what part of your curve it’s working on, all that ultimately matters is how many offspring each individual has. If white moths have more children than black moths, then you end up with more white moths. If black moths have more babies, then you get more black moths.

Source: SUPS.org

So what happens when you completely remove selective pressures from a population?

Back in 1968, ethologist John B. Calhoun set up an experiment popularly called “Mouse Utopia.” Four pairs of mice were given a large, comfortable habitat with no predators and plenty of food and water.

Predictably, the mouse population increased rapidly–once the mice were established in their new homes, their population doubled every 55 days. But after 211 days of explosive growth, reproduction began–mysteriously–to slow. For the next 245 days, the mouse population doubled only once every 145 days.

The birth rate continued to decline. As births and deaths reached parity, the mouse population stopped growing. Finally the last breeding female died, and the whole colony went extinct.

source
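
A quick check on those doubling times, using only the figures quoted above:

```python
import math

# Convert the reported doubling times into per-day growth rates.
for phase, t_double in (("days 0-211", 55), ("days 211-456", 145)):
    r = math.log(2) / t_double
    print(f"{phase}: doubling every {t_double} days ≈ {r:.3%} growth per day")

# Eight founding mice growing for 211 days at the fast rate:
print(f"population at day 211 ≈ {8 * 2 ** (211 / 55):.0f} mice")
```

The per-day growth rate fell to roughly 38% of its initial value, at a point when the four founding pairs had grown to only around a hundred mice.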

As I’ve mentioned before, Israel is (AFAIK) the only developed country in the world with a TFR above replacement.

It has long been known that overcrowding leads to population stress and reduced reproduction, but overcrowding can only explain why the mouse population began to shrink–not why it died out. Surely by the time there were only a few breeding pairs left, things had become comfortable enough for the remaining mice to resume reproducing. Why did the population not stabilize at some comfortable level?

Professor Bruce Charlton suggests an alternative explanation: the removal of selective pressures on the mouse population resulted in increasing mutational load, until the entire population became too mutated to reproduce.

What is genetic load?

As I mentioned before, every time a cell replicates, a certain number of errors–mutations–occur. Occasionally these mutations are useful, but the vast majority of them are not. About 30-50% of pregnancies end in miscarriage (the percent of miscarriages people recognize is lower because embryos often miscarry before causing any overt signs of pregnancy,) and the majority of those miscarriages are caused by genetic errors.

Unfortunately, randomly changing part of your genetic code is more likely to give you no skin than skintanium armor.

But it’s only the worst genetic problems that never see the light of day. Plenty of mutations merely reduce fitness without actually killing you. Down Syndrome, famously, is caused by an extra copy of chromosome 21.

While a few traits–such as sex or eye color–can be simply modeled as influenced by only one or two genes, many traits–such as height or IQ–appear to be influenced by hundreds or thousands of genes:

Differences in human height is 60–80% heritable, according to several twin studies[19] and has been considered polygenic since the Mendelian-biometrician debate a hundred years ago. A genome-wide association (GWA) study of more than 180,000 individuals has identified hundreds of genetic variants in at least 180 loci associated with adult human height.[20] The number of individuals has since been expanded to 253,288 individuals and the number of genetic variants identified is 697 in 423 genetic loci.[21]

Obviously each of these genes plays only a small role in determining overall height (and this is of course holding environmental factors constant.) There are a few extreme conditions–gigantism and dwarfism–that are caused by single mutations, but the vast majority of height variation is caused by which particular mix of those 700 or so variants you happen to have.

The situation with IQ is similar:

Intelligence in the normal range is a polygenic trait, meaning it’s influenced by more than one gene.[3][4]

The general figure for the heritability of IQ, according to an authoritative American Psychological Association report, is 0.45 for children, and rises to around 0.75 for late teens and adults.[5][6] In simpler terms, IQ goes from being weakly correlated with genetics, for children, to being strongly correlated with genetics for late teens and adults. … Recent studies suggest that family and parenting characteristics are not significant contributors to variation in IQ scores;[8] however, poor prenatal environment, malnutrition and disease can have deleterious effects.[9][10]

And from a recent article published in Nature Genetics, Genome-wide association meta-analysis of 78,308 individuals identifies new loci and genes influencing human intelligence:

Despite intelligence having substantial heritability (0.54) and a confirmed polygenic nature, initial genetic studies were mostly underpowered. Here we report a meta-analysis for intelligence of 78,308 individuals. We identify 336 associated SNPs (METAL P < 5 × 10⁻⁸) in 18 genomic loci, of which 15 are new. Around half of the SNPs are located inside a gene, implicating 22 genes, of which 11 are new findings. Gene-based analyses identified an additional 30 genes (MAGMA P < 2.73 × 10⁻⁶), of which all but one had not been implicated previously. We show that the identified genes are predominantly expressed in brain tissue, and pathway analysis indicates the involvement of genes regulating cell development (MAGMA competitive P = 3.5 × 10⁻⁶). Despite the well-known difference in twin-based heritability for intelligence in childhood (0.45) and adulthood (0.80), we show substantial genetic correlation (r_g = 0.89, LD score regression P = 5.4 × 10⁻²⁹). These findings provide new insight into the genetic architecture of intelligence.

The greater the number of genes influencing a trait, the harder those genes are to identify without extremely large studies, because any small group of people might not even have the same set of relevant genes.

High IQ correlates positively with a number of life outcomes, like health and longevity, while low IQ correlates with negative outcomes like disease, mental illness, and early death. Obviously this is in part because dumb people are more likely to make dumb choices which lead to death or disease, but IQ also correlates with choice-free matters like height and your ability to quickly press a button. Our brains are not some mysterious entities floating in a void, but physical parts of our bodies, and anything that affects our overall health and physical functioning is likely to also have an effect on our brains.

Like height, most of the genetic variation in IQ is the combined result of many genes. We’ve definitely found some mutations that result in abnormally low IQ, but so far we have yet (AFAIK) to find any genes that produce an IQ equivalent of gigantism. In other words, low (genetic) IQ is caused by genetic load–Small Yet Important Genetic Differences Between Highly Intelligent People and General Population:

The study focused, for the first time, on rare, functional SNPs – rare because previous research had only considered common SNPs and functional because these are SNPs that are likely to cause differences in the creation of proteins.

The researchers did not find any individual protein-altering SNPs that met strict criteria for differences between the high-intelligence group and the control group. However, for SNPs that showed some difference between the groups, the rare allele was less frequently observed in the high intelligence group. This observation is consistent with research indicating that rare functional alleles are more often detrimental than beneficial to intelligence.

Maternal mortality rates over time, UK data

Greg Cochran has some interesting Thoughts on Genetic Load. (Currently, the most interesting candidate genes for potentially increasing IQ also have terrible side effects, like autism, Tay-Sachs, and Torsion Dystonia. The idea is that–perhaps–if you have only a few genes related to the condition, you get an IQ boost, but if you have too many, you get screwed.) Of course, even conventional high IQ has a cost: increased maternal mortality (larger heads).

Wikipedia defines genetic load as:

the difference between the fitness of an average genotype in a population and the fitness of some reference genotype, which may be either the best present in a population, or may be the theoretically optimal genotype. … Deleterious mutation load is the main contributing factor to genetic load overall.[5] Most mutations are deleterious, and occur at a high rate.

There’s math, if you want it.
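
The core of that math is a one-line formula, which we can check directly. A minimal sketch using the definition quoted above, with invented fitness values (not data from any real population):

```python
# Genetic load: L = (w_max - w_mean) / w_max, per the definition quoted above.
# The four genotype fitnesses are invented purely for illustration.
fitnesses = [1.00, 0.95, 0.90, 0.70]

w_max = max(fitnesses)
w_mean = sum(fitnesses) / len(fitnesses)
load = (w_max - w_mean) / w_max
print(f"genetic load = {load:.4f}")  # 0.1125
```

Read it as: the population’s average fitness sits about 11% below the best genotype present.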

Normally, genetic mutations are removed from the population at a rate determined by how bad they are. Really bad mutations kill the embryo before birth, so affected children are never born. Slightly less bad mutations might let their carriers survive, but not reproduce. Mutations that are only a little bit deleterious might have no obvious effect, but result in their carriers having slightly fewer children than their neighbors. Over many generations, such a mutation will eventually disappear.

(Some mutations are more complicated–sickle cell, for example, is protective against malaria if you have only one copy of the mutation, but gives you sickle cell anemia if you have two.)
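
To put rough numbers on “a rate determined by how bad they are,” here is a one-locus sketch of how many generations selection needs just to halve a deleterious allele’s frequency; the selection coefficients are illustrative assumptions:

```python
# How quickly selection removes a deleterious allele of initial frequency 1%.
# Carriers have fitness 1 - s; the s values below are illustrative assumptions.
def generations_to_halve(s, q0=0.01):
    q, gens = q0, 0
    while q > q0 / 2:
        q = q * (1 - s) / (1 - s * q)   # standard one-locus selection update
        gens += 1
    return gens

for s in (0.5, 0.1, 0.01, 0.001):
    print(f"s = {s:>5}: ~{generations_to_halve(s)} generations to halve")
```

A mutation costing its carriers half their fitness is nearly gone in a couple of generations; one costing a tenth of a percent takes roughly 700 generations to halve–which is why weakly deleterious variants linger, and why they pile up once selection relaxes.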

Jakubany is a town in the Carpathian Mountains

Throughout history, infant mortality was our single biggest killer. For example, here is some data from Jakubany, a town in the Carpathian Mountains:

We can see that, prior to the 1900s, the town’s infant mortality rate stayed consistently above 20%, and often peaked near 80%.

The graph’s creator states:

When I first ran a calculation of the infant mortality rate, I could not believe certain of the intermediate results. I recompiled all of the data and recalculated … with the same astounding result – 50.4% of the children born in Jakubany between the years 1772 and 1890 would die before reaching ten years of age! …one out of every two! Further, over the same 118 year period, of the 13306 children who were born, 2958 died (~22 %) before reaching the age of one.

Historical infant mortality rates can be difficult to calculate, in part because deaths were so common that people didn’t always bother to record them. And since infants are small and their bones delicate, their burials are not as easy to find as adults’. Nevertheless, Wikipedia estimates that Paleolithic man had an average life expectancy of 33 years:

Based on the data from recent hunter-gatherer populations, it is estimated that at 15, life expectancy was an additional 39 years (total 54), with a 0.60 probability of reaching 15.[12]

Priceonomics: Why life expectancy is misleading

In other words, a 40% chance of dying in childhood. (Not exactly the same as infant mortality, but close.)

Wikipedia gives similarly dismal stats for life expectancy in the Neolithic (20-33), Bronze and Iron ages (26), Classical Greece (28 or 25), Classical Rome (20-30), Pre-Columbian Southwest US (25-30), Medieval Islamic Caliphate (35), Late Medieval English Peerage (30), early modern England (33-40), and the whole world in 1900 (31).

Over at ThoughtCo: Surviving Infancy in the Middle Ages, the author reports estimates for between 30 and 50% infant mortality rates. I recall a study on Anasazi nutrition which I sadly can’t locate right now, which found 100% malnutrition rates among adults (based on enamel hypoplasias,) and 50% infant mortality.

As Priceonomics notes, the main driver of increasing global life expectancy–48 years in 1950 and 71.5 years in 2014 (according to Wikipedia)–has been a massive decrease in infant mortality. The average life expectancy of an American newborn back in 1900 was only 47 and a half years, whereas a 60 year old could expect to live to be 75. In 1998, the average infant could expect to live to about 75, and the average 60 year old could expect to live to about 80.
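
The arithmetic behind this is worth making explicit, because “life expectancy of 47” never meant adults dropped dead at 47. A toy calculation with round numbers (not an actual 1900 life table):

```python
# Life expectancy at birth is an average over everyone born, so infant
# deaths drag it down sharply. Round illustrative numbers, not real data:
infant_mortality = 0.30   # 30% die around birth
adult_lifespan = 65       # average age at death for those surviving infancy

e0 = infant_mortality * 0 + (1 - infant_mortality) * adult_lifespan
print(f"life expectancy at birth = {e0:.1f} years")  # 45.5
```

A 30% infant death rate plus adults who typically reach 65 averages out to a “life expectancy” of 45.5–the headline number is dominated by deaths at age zero.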

Back in his post on Mousetopia, Charlton writes:

Michael A Woodley suggests that what was going on [in the Mouse experiment] was much more likely to be mutation accumulation; with deleterious (but non-fatal) genes incrementally accumulating with each generation and generating a wide range of increasingly maladaptive behavioural pathologies; this process rapidly overwhelming and destroying the population before any beneficial mutations could emerge to ‘save’ the colony from extinction. …

The reason why mouse utopia might produce so rapid and extreme a mutation accumulation is that wild mice naturally suffer very high mortality rates from predation. …

Thus mutation selection balance is in operation among wild mice, with very high mortality rates continually weeding-out the high rate of spontaneously-occurring new mutations (especially among males) – with typically only a small and relatively mutation-free proportion of the (large numbers of) offspring surviving to reproduce; and a minority of the most active and healthy (mutation free) males siring the bulk of each generation.

However, in Mouse Utopia, there is no predation and all the other causes of mortality (eg. Starvation, violence from other mice) are reduced to a minimum – so the frequent mutations just accumulate, generation upon generation – randomly producing all sorts of pathological (maladaptive) behaviours.

Historically speaking, another selective factor operated on humans: while about 67% of women reproduced, only 33% of men did. By contrast, according to Psychology Today, a majority of today’s men have or will have children.

Today, almost everyone in the developed world has plenty of food, a comfortable home, and doesn’t have to worry about dying of bubonic plague. We live in humantopia, where the biggest factor influencing how many kids you have is how many you want to have.

source

Back in 1930, infant mortality rates were highest among the children of unskilled manual laborers, and lowest among the children of professionals (IIRC, this is British data.) Today, infant mortality is almost non-existent, but voluntary childlessness has now inverted this phenomenon:

Yes, the percent of childless women appears to have declined since 1994, but the overall pattern of who is having children still holds. Further, while only 8% of women with postgraduate degrees have 4 or more children, 26% of those who never graduated from high school have 4+ kids. Meanwhile, the age of first-time moms has continued to climb.

In other words, the strongest remover of genetic load–infant mortality–has all but disappeared; populations with higher load (lower IQ) are having more children than populations with lower load; and everyone is having children later, which also increases genetic load.

Take a moment to consider the high-infant mortality situation: an average couple has a dozen children. Four of them, by random good luck, inherit a good combination of the couple’s genes and turn out healthy and smart. Four, by random bad luck, get a less lucky combination of genes and turn out not particularly healthy or smart. And four, by very bad luck, get some unpleasant mutations that render them quite unhealthy and rather dull.

Infant mortality claims half their children, taking the least healthy. They are left with four bright children and two moderately intelligent children. The bright children succeed at life, marry well, and end up with several healthy, surviving children of their own, while the moderately intelligent do okay and end up with a couple of children each.

On average, society’s overall health and IQ should hold steady or even increase over time, depending on how strong the selective pressures actually are.

Or consider a consanguineous couple with a high risk of genetic birth defects: perhaps a full 80% of their children die, but 20% turn out healthy and survive.

Today, by contrast, your average couple has two children. One of them is lucky, healthy, and smart. The other is unlucky, unhealthy, and dumb. Both survive. The lucky kid goes to college, majors in underwater intersectionist basket-weaving, and has one kid at age 40. That kid has Down Syndrome and never reproduces. The unlucky kid can’t keep a job, has chronic health problems, and 3 children by three different partners.

Your consanguineous couple migrates from war-torn Somalia to Minnesota. They still have 12 kids, but three of them are autistic with IQs below the official retardation threshold. “We never had this back in Somalia,” they cry. “We don’t even have a word for it.”

People normally think of dysgenics as merely “the dumb outbreed the smart,” but genetic load applies to everyone–men and women, smart and dull, black and white, young and especially old–because we all make random transcription errors when copying our DNA.
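
Here is a toy version of that argument, in the spirit of Charlton’s mouse story (every parameter is invented for illustration; a real model would need empirical mutation rates and fitness effects): each child inherits a parent’s load plus new mutations, and selection removes some fraction of the most-loaded children before they reproduce.

```python
import random

# Toy model of mutation accumulation. All numbers are illustrative.
def mean_load(cull_fraction, generations=25, pop=200):
    load = [0] * pop
    for _ in range(generations):
        # each child: a random parent's load + ~2 new mutations (Binomial(4, 0.5))
        kids = [random.choice(load) + sum(random.random() < 0.5 for _ in range(4))
                for _ in range(pop)]
        kids.sort()                                   # least-loaded first
        survivors = kids[:int(pop * (1 - cull_fraction))]
        load = random.choices(survivors, k=pop)       # survivors repopulate
    return sum(load) / pop

random.seed(0)
for cull in (0.0, 0.5, 0.9):
    print(f"culling {cull:.0%} per generation -> mean load ≈ {mean_load(cull):.1f}")
```

In this toy, mild culling merely slows the accumulation; holding load roughly steady requires removing a large share of every generation–which is what wild-mouse predation and historical infant mortality did, and what modern societies no longer do.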

I could offer a list of signs of increasing genetic load, but there’s no way to avoid cherry-picking trends I already know are happening, like falling sperm counts or rising (diagnosed) autism rates, so I’ll skip that. You may substitute your own list of “obvious signs society is falling apart at the genes” if you so desire.

Nevertheless, the transition from 30% (or greater) infant mortality to almost 0% is amazing, both on a technical level and because it heralds an unprecedented era in human evolution. The selective pressures on today’s people are massively different from those our ancestors faced, simply because our ancestors’ biggest filter was infant mortality. Unless infant mortality acted completely at random–taking the genetically loaded and unloaded alike–or on factors completely irrelevant to load, the elimination of infant mortality must continuously increase the genetic load in the human population. Over time, if that load is not selected out–say, through more people being too unhealthy to reproduce–then we will end up with an increasing population of physically sick, maladjusted, mentally ill, and low-IQ people.

(Remember, all mental traits are heritable–so genetic load influences everything, not just controversial ones like IQ.)

If all of the above is correct, then I see only 4 ways out:

  1. Do nothing: Genetic load increases until the population is non-functional and collapses, resulting in a return of Malthusian conditions, invasion by stronger neighbors, or extinction.
  2. Sterilization or other weeding out of high-load people, coupled with higher fertility by low-load people
  3. Abortion of high load fetuses
  4. Genetic engineering

#1 sounds unpleasant, and #2 would result in masses of unhappy people. We don’t have the technology for #4, yet. I don’t think the technology is quite there for #3, either, but it’s much closer–we can certainly test for many of the deleterious mutations that we do know of.

Gay marriage didn’t win; traditional marriage lost

From the evolutionist point of view, the point of marriage is the production of children.

Let’s quickly analogize to food. Humans have a tremendous variety of customs, habits, traditions, and taboos surrounding foods. Foods enjoyed in one culture, like pork, crickets, and dog, are regarded as disgusting, immoral, or forbidden in another. Cheese is, at heart, rotten vomit–the enzyme traditionally used to curdle the milk, rennet, is extracted from a calf’s stomach lining–and yet the average American eats it eagerly.

Food can remind you of your childhood, the best day of your life, the worst day of your life. It can comfort the sick and the mourning, and it accompanies our biggest celebrations of life.

Eh, I’d be happy giving him a microstate and seeing how he does running it.

We eat comfort food, holiday food, even sacrificial food. We have decadent luxuries and everyday staples. Some people, like vegans and ascetics, avoid large classes of food generally eaten by their own society for moral reasons.

People enjoy soda because it has water and calories, but some of us purposefully trick our taste buds by drinking Diet Coke, which delivers the sensation of drinking calories without the calories themselves. We enjoy the taste of calories even when we don’t need any more.

But the evolutionary purpose of eating is to get enough calories and nutrients to survive. If tomorrow we all stopped needing to eat–say, we were all hooked into a Matrix-style click-farm in which all nutrients were delivered automatically via IV–all of the symbolic and emotional content attached to food would wither away.

The extended helplessness of human infants is unique in the animal kingdom. Even elephants, who gestate for an incredible two years and become mature at 18, can stand and begin walking around shortly after birth. Baby elephants are not raised solely by their mothers, as baby rats are, but by an entire herd of related female elephants.

Elephants are remarkable animals, clever, communicative, and caring, who mourn their dead and create art:


But from the evolutionist point of view, the point of elephants’ family systems is still the production of elephant children.

Love is a wonderful, sweet, many-splendored thing, but the purpose of marriage, in all its myriad forms–polygamy, monogamy, polyandry, serial monogamy–is still the production of children.

There are a few societies where marriage as we know it is not really practiced because people depend on alternative kin networks or women can largely provide for themselves. For example, 70% of African American children are born out of wedlock; and among the avuncular Apache:

In the Southwest United States, the Apache tribe practices a form of this, where the uncle is responsible for teaching the children social values and proper behavior while inheritance and ancestry is reckoned through the mother’s family alone. (Modern day influences have somewhat but not completely erased this tradition.)

source: BBC News

Despite the long public argument over the validity of gay marriage, very few gay people actually want to get married. Gallup reports that after the Obergefell v. Hodges ruling, the percent of married gay people jumped quickly from 7.9% to 9.5%, but then leveled off, rising to only 9.6% by June 2016.

In contrast, 46% of US adults are married.

Even this number, though, is in sharp decline: in 1960, 72% of adults were married; by 2010, only 51% were.

The situation is similar throughout the Western world. Only 51% of Brits are married. In Italy, the crude marriage rate (the number of new marriages per 1,000 people), has fallen from 7.35 in 1970 to only 4.21 in 2007. Only 58.9% of Japanese are married.

Declining marriage rates across the developed world have been accompanied by declining fertility rates and rising illegitimacy rates:

Graph: children per woman, 1960–2009, in the USA, China, India, Germany, and Russia.

H/T: Share of Births to Unmarried Mothers by Race

As Wikipedia notes:

Only 2% of [Japanese] births occur outside of marriage[35] (compared to 30-60% in Europe and North America) due to social taboos, legal pressure, and financial hurdles.[32] Half of Japan’s single mothers live below the poverty line, among the highest for OECD countries.[36][37][38][39]

In other words, the Japanese welfare state, while generous, does not encourage single motherhood. Wikipedia also provides a discussion of the causes of declining Japanese marriage rates:

The annual number of marriages has dropped since the early 1970s, while divorces have shown a general upward trend.[29] …

The decline of marriage in Japan, as fewer people marry and do so later in life, is a widely cited explanation for the plummeting birth rate.[29][30][31][32] Although the total fertility rate has dropped since the 1970s (to 1.43 in 2013[33]), birth statistics for married women have remained fairly constant (at around 2.1) and most married couples have two or more children. Economic factors, such as the cost of raising a child, work-family conflicts, and insufficient housing, are the most common reasons for young mothers (under 34) to have fewer children than desired. …

Between 1990 and 2010, the percentage of 50-year-old people who had never married roughly quadrupled for men to 20.1% and doubled for women to 10.6%.[41][42] The Welfare Ministry predicts these numbers to rise to 29% of men and 19.2% of women by 2035.[43] The government’s population institute estimated in 2014 that women in their early 20s had a one-in-four chance of never marrying, and a two-in-five chance of remaining childless.[44]

Recent media coverage has sensationalized surveys from the Japan Family Planning Association and the Cabinet Office that show a declining interest in dating and sexual relationships among young people, especially among men.[44][45][46] However, changes in sexuality and fertility are more likely an outcome of the decline in family formation than its cause.[47][48] Since the usual purpose of dating in Japan is marriage, the reluctance to marry often translates to a reluctance to engage in more casual relationships.[30]

In other words, marriage is functionally about providing a supportive way of raising children. In a society where birth control does not exist, children born out of wedlock tend not to survive, and people can easily get jobs to support their families, people tended to get married and have children. In a society where people do not want children, cannot afford them, are purposefully delaying childbearing as long as possible, or have found ways to provide for them without getting married, people simply see no need for marriage.

“Marriage” ceases to mean what it once did, becoming something reserved for old-fashioned romantics and the few lucky enough to afford it.

Mass acceptance of gay marriage did change how people think of marriage, but it’s downstream from what the massive, societal-wide decrease in child-bearing and increase in illegitimacy have done to our ideas about marriage.

Species of Exit: Israel

Israel is–as far as I can tell–one of the sanest, least self-destructive states in the entire West. (Note: this is not to say that I love everything about Israel; this is actually a pretty low bar, given what’s happening everywhere else.) Their people are literate and healthy, they have a per capita GDP of 36.5k, (33rd in the world,) and they’re 18th globally on the Human Development Index. They don’t throw people off of buildings or have public floggings, and despite the fact that they have birth control and the state actually pays for abortions, the Jewish population still has a positive fertility rate:

The fertility rates of Jewish and Arab women were identical for the first time in Israeli history in 2015, according to figures released by the Israel Central Bureau of Statistics on Tuesday….Jewish and Arab women had given birth to an average of 3.13 children as of last year.

According to Newsweek:

This high fertility rate is not simply an artifact of Israel’s growing ultra-Orthodox or Haredi population; the non-Haredi fertility rate is 2.6. (This is, by the way, a far higher fertility rate than that of American Jews, which is 1.9; the replacement rate is 2.3.)

Did you think we were getting through this without a Polandball joke?

And they’ve managed to resist getting conquered by their aggressive and numerically superior neighbors several times in the past century.

Not bad for a country that didn’t exist 100 years ago, had to be built from the sand up, and is filled with people whom conventional wisdom holds ought to have been rendered completely useless by multi-generational epigenetic trauma.

Now, yes, Israel does get a lot of support from the US, and who knows what it would look like (or if it would exist at all,) in an alternative timeline where the US ignores it. Israel probably isn’t perfect, just interesting.

Harking back to my Measures of Meaning post, I propose that Israel has 4 things going for it:

Ethiopian Jewish member of the IDF

1. Israelis have meaningful work. Their work has been, literally, to build and secure their nation. Israelis have had to build almost the entire infrastructure of their country over the past hundred years, from irrigation systems to roads to cities. Today, Tel Aviv is a city with a population of 430,000 people. In 1900, Tel Aviv didn’t exist.

Unlike the US, Israel has a draft: every Israeli citizen, male and female, has to serve in the Israeli army. (Obviously exceptions exist.) This is not seen as state-run slavery but part of making sure the entire society continues to exist, because Israel faces some pretty real threats to its borders.

The IDF even has a special division for autists:

Many autistic soldiers who would otherwise be exempt from military service have found a place in Unit 9900, a selective intelligence squad where their heightened perceptual skills are an asset. …

The relationship is a mutually beneficial one. For these young people, the unit is an opportunity to participate in a part of Israeli life that might otherwise be closed to them. And for the military, it’s an opportunity to harness the unique skill sets that often come with autism: extraordinary capacities for visual thinking and attention to detail, both of which lend themselves well to the highly specialized task of aerial analysis.


I suspect–based on personal conversations–that there is something similar in the US military, but have no proof.

My anthropological work suggests that one of the reasons people enter the military is to find meaning in their lives, (though this doesn’t work nearly as well when your country does things like invade completely irrelevant countries you don’t actually care about like Vietnam.)

2. Like I said, Israelis have above-replacement total fertility–meaning that many Israelis hail from large families, with lots of children, siblings, and cousins. Israelis appear to have managed to achieve this in part by subsidizing births (which probably will have some long-term negative effects for them,*) and in part by explicitly advocating high birth rates in order to prevent themselves from being out-bred by the Palestinians and to show that Hitler what for.

*ETA: See the comments for a discussion of dysgenic fertility in Israel.

I have been saving this picture for so long

3. Religion is so obviously a unifying force in Israeli life that I don’t think I need to detail it.

What about that fourth thing? Oh yes: Many of the Jews who don’t like the idea of “nations” and “ethno states” and “religion” probably moved to the US instead of Israel. The US got the SJW Jews and Israel got the nationalist Jews.

4. A sense of themselves as a distinct nation. As I’ve discussed before, this is not exactly genetic: the various Jewish groups absorbed about 50% of their DNA from the peoples around them during the diaspora years, and of course a big part of the country is Arab/Palestinian. But there is still much genetically in common.

There is probably a lot I’m missing.

Of course there are religious Jews in the US (and their numbers are growing relative to the secular Jewish population.) While Jews as a whole voted 70% for Hillary, only 56% of the Orthodox supported her. (I’ve seen different numbers elsewhere, but these are the ones I’ve been able to find a source for.)

(I suspect that America’s high-IQ secular Jews suffer from being in America instead of Israel. They don’t have religion to guide them, children to focus them, nor (in many cases) meaningful work. Without something positive to work towards, they turn to politics/ideology to provide meaning in their lives, while simultaneously suffering the psychological stress of knowing that the Holocaust was directed at people like them.)

But that’s irrelevant to Israeli Jews.

Long-term, I’m not bullish on Israel, given its precarious location, surrounded by nations that aren’t very fond of it–and I am not offering any opinions about the Israeli/Palestinian situation–but as first world nations go, it at least desires to keep existing.

A Fertility Story: (Warning, image heavy)


Tuesday’s post took longer to write than expected, so today’s post is being told entirely in images:

World population over time


 

Inflation and the cost of goods


 

Intelligence vs. desired number of children, from Cognitive dysgenics in the OKCupid dataset: a few simple analyses by Emil O. W. Kirkegaard (http://emilkirkegaard.dk/en/?p=5942) — there are several other graphs in the post, so be sure to check them out.

From Selection against variants in the genome associated with educational attainment (PDF)

Age-specific fertility


Source: CDC data, I believe
