Trying to be Smart: on bringing up extremely rare exceptions to prove forests don’t exist, only trees

When my kids don’t want to do their work (typically word problems in math,) they start coming up with all kinds of crazy scenarios to try to evade the question. “What if Susan cloned herself?” “What if Joe is actually the one driving the car, and he only saw the car pass by because he was looking at himself in a mirror?” “What if John used a wormhole to travel backwards in time and so all of the people at the table were actually Joe and so I only need to divide by one?” “What if Susan is actually a boy but her parents accidentally gave him the wrong name?” “What if ALIENS?”

After banging my head on the wall, I started asking, “Which is more likely: Sally and Susan are two different people, or Sally cloned herself, something no human has ever done before in the 300,000 years of Homo sapiens’ existence?” And sometimes they will, grudgingly, admit that their scenarios are slightly less likely than the assumptions the book is making.*

I forgive my kids, because they’re children. When adults do the same thing, I am much less sympathetic.

Folks on all sides of the political spectrum are probably guilty of this, but my inclinations/bubble lead me to encounter certain ones more often. Sex/gender is a huge one (even I have been led astray by sophistry on this subject, for which I apologize.)

Over in biology, sex is simply defined: Females produce large gametes. Males produce small gametes. It doesn’t matter how gametes are produced. It doesn’t matter what determines male or femaleness. All that matters is gamete size. There is no such thing (at least in humans) as a sex “spectrum”: reproduction requires one small gamete and one large gamete. Medium-sized gametes are not part of the process.

About 99.9% of people fit into the biological categories of “male” and “female.” An extremely small minority (<1%) have rare biological issues that interfere with gamete formation–people with Klinefelter’s, for example, are genetically XXY instead of XX or XY. People with Klinefelter’s are also generally infertile–unlike large gametes and small gametes, XXY isn’t part of a biological reproduction strategy. Like trisomy 21, it’s just an unfortunate accident in cell division.

In a mysterious twist, the vast majority of people have a “gender” identity that matches their biological sex. Even female athletes–women who excel at a stereotypically and highly masculine field–tend to identify as “women,” not men. Even male fashion designers tend to self-identify as men. There are a few people who identify as transgender, but in my personal experience, most of them are actually intersex in some way (eg, a woman who has autism, a condition characterized as “extreme male brain,” may legitimately feel like she thinks more like a guy than a girl.) Again, this is an extremely small percent of the population. For 99% of people you meet, normal gender assumptions apply.

So jumping into a conversation about “men” and “women” with “Well actually, ‘men’ and ‘women’ are just social constructs and gender is actually a spectrum and there are many different valid gender expressions–” is a great big NO.

Jumping into a discussion of women’s issues (like childbirth) with “Actually, men can give birth, too,” or the Women’s March with “Pussyhats are transphobic because some women have penises; vaginas don’t define what it means to be female,” is an even bigger NO, and I’m not even a fan of pussyhats.

Only biological females can give birth. That’s how the species works. When it comes to biology, leave things that you admit aren’t biology at the door. If a transgender man with a uterus gives birth to a child, he is still a biological female and we don’t need to confuse things by implying that someone gestated a fetus in his testicles. Over the millennia that humans have existed, a handful of people with some form of biological chimerism (basically, an internalized conjoined twin who never fully developed but ended up contributing an organ or two) who thought of themselves as male may have nonetheless given birth. These cases are so rare that you will probably never meet someone with them in your entire life.

Having lost a leg in an accident (or having 4 legs, in the case of a pair of conjoined twins,) does not make “number of legs in humans” a spectrum ranging from 0 to 4. Humans have 2 legs; a few people have unfortunate accidents. Saying so doesn’t imply that people with 0 legs are somehow less human. They just had an accident.

In a conversation I read recently, Person A asserted that if two blue-eyed parents had a brown-eyed baby, the mother would be suspected of infidelity. A whole bunch of people immediately jumped on Person A, claiming he was scientifically ignorant and hadn’t paid attention in school–sadly, these overconfident people are actually the ones who don’t understand genetics, because blue eyes are recessive, and thus two blue-eyed people can’t make a brown-eyed biological child. A few people, however, asserted that Person A was scientifically illiterate because there is an extremely rare brown-eyed gene that two blue-eyed people can carry, resulting in a brown-eyed child.

But this is not scientific illiteracy. The recessive brown-eyed gene is extremely rare, and both parents would have to carry it. Infidelity, by contrast, is much more common–not terribly common, but more common than two parents both carrying recessive brown-eyed genes. Insisting that Person A is scientifically illiterate because of an extremely rare exception to the rule is ignoring statistics: the child is more likely to be non-biological than to carry an extremely rare variant. Likewise, statistically, men and women are far more likely to match in gender and sex than not.
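To make the comparison concrete, here’s a little back-of-the-envelope sketch. Every number in it–carrier frequency, non-paternity rate, and so on–is an assumption I made up for illustration, not a measured value; the point is just that the two explanations differ by orders of magnitude:

```python
# Back-of-the-envelope comparison of two explanations for a brown-eyed child
# born to two blue-eyed parents. Every number below is an ILLUSTRATIVE
# ASSUMPTION, not a measured value -- substitute your own estimates.

carrier_freq = 0.001              # assumed frequency of the rare "hidden brown" variant
p_child_if_both_carry = 0.25      # chance the child shows brown eyes if both parents carry it
misattributed_paternity = 0.02    # assumed ~2% non-paternity rate
p_brown_from_other_father = 0.5   # assumed chance another father yields a brown-eyed child

# Route A: both blue-eyed parents happen to carry the rare variant.
p_rare_variant = carrier_freq ** 2 * p_child_if_both_carry
# Route B: the social father is not the biological father.
p_infidelity = misattributed_paternity * p_brown_from_other_father

print(f"P(rare-variant route) ~ {p_rare_variant:.2e}")
print(f"P(infidelity route)   ~ {p_infidelity:.2e}")
print(f"Odds, infidelity : rare variant ~ {p_infidelity / p_rare_variant:,.0f} : 1")
```

With these made-up inputs the infidelity route comes out tens of thousands of times more likely; you can make the gap smaller or larger by changing the assumptions, but it takes implausible numbers to make the rare-variant route the better bet.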

Let’s look at immigration, another topic near and dear to everyone’s hearts. After Trump’s comments about Haiti came out (and let’s be honest, Haiti’s capital, Port-au-Prince, is one of the world’s largest cities without a functioning sewer system, so “shithole” is actually true,) people began popping up with statements like “I’d rather a Ugandan immigrant who believes in American values than a socialist Norwegian.”

I, too, would rather a Ugandan with American values than a socialist Norwegian. However, what percentage of Ugandans actually have American values? Just a wild guess, but I suspect most Ugandans have Ugandan values. Most Ugandans probably think Ugandan culture is pretty nice and that Ugandan norms and values are the right ones to have; if they thought otherwise, they’d hold different values, and we’d call those Ugandan values instead.

Updated values chart!

While we’re at it, I suspect most Chinese people have Chinese values, most Australians have Australian values, most Brazilians hold Brazilian values, and most people from Vatican City have Catholic values.

I don’t support blindly taking people from any country, because some people are violent criminals just trying to escape conviction. But some countries are clearly closer to each other, culturally, than others, and thus have a larger pool of people who hold each other’s values.

(Even when people hold very different values, some values conflict more than others.)

To be clear: I’ve been picking on one side, but I’m sure both sides do this.

What’s the point? None of this is very complicated. Most people can figure out if a person they have just met is male or female instantly and without fail. It takes a very smart person to get confused by a few extremely rare exceptions into thinking that the broad categories don’t functionally exist.

Sometimes this obfuscation isn’t deliberate–the person just wants to show how smart they are, or everyone around them is saying it, so they start repeating it too. But most people seem perfectly capable of handling probabilities in everyday life (“Sometimes the stoplight is glitched, but usually it isn’t, so I’ll assume it’s working and obey it”). So if someone suddenly seems incapable of distinguishing between extremely rare and extremely common events in the political realm, they are either doing so on purpose or suffering from severe cognitive dissonance.


*Oddly, I solved the problem by giving the kids harder problems. It appears that when their brains are actively engaged with trying to solve the problem, they don’t have time/energy left to come up with alternatives. When the material is too easy (or, perhaps, way too hard) they start trying to get creative to make things more interesting.



North Africa in Genetics and History

detailed map of African and Middle Eastern ethnicities in Haaks et al’s dataset

North Africa is an often misunderstood region in human genetics. Since it is in Africa, people often assume that it contains the same variety of people referenced in terms like “African Americans,” “black Africans,” or even just “Africans.” In reality, the African continent contains members of all three of the great human clades–Sub-Saharan Africans in the south, Austronesians (Asian clade) in Madagascar, and Caucasians in the north.

The North African Middle Stone Age and its place in recent human evolution provides an overview of the first 275,000 years of humanity’s history in the region (300,000–25,000 years ago, more or less), including the development of symbolic culture and early human dispersal. Unfortunately the paper is paywalled.

Throughout most of human history, the Sahara–not the Mediterranean or Red seas–has been the biggest local impediment to human migration; thus North Africans are much closer, genetically, to their neighbors in Europe and the Middle East than to their neighbors across the desert (and before the domestication of the camel, about 3,000 years ago, the Sahara was even harder to cross.)

But from time to time, global weather patterns change and the Sahara becomes a garden: the Green Sahara. The last time we had a Green Sahara was about 9,000–7,000 years ago; during this time, people lived, hunted, fished, herded, and perhaps farmed throughout areas that are today nearly uninhabited wastes.

The Peopling of the last Green Sahara revealed by high-coverage resequencing of trans-Saharan patrilineages sheds light on how the Green (and subsequently brown) Sahara affected the spread (and separation) of African groups into northern and sub-Saharan populations:

In order to investigate the role of the last Green Sahara in the peopling of Africa, we deep-sequence the whole non-repetitive portion of the Y chromosome in 104 males selected as representative of haplogroups which are currently found to the north and to the south of the Sahara. … We find that the coalescence age of the trans-Saharan haplogroups dates back to the last Green Sahara, while most northern African or sub-Saharan clades expanded locally in the subsequent arid phase. …

Our findings suggest that the Green Sahara promoted human movements and demographic expansions, possibly linked to the adoption of pastoralism. Comparing our results with previously reported genome-wide data, we also find evidence for a sex-biased sub-Saharan contribution to northern Africans, suggesting that historical events such as the trans-Saharan slave trade mainly contributed to the mtDNA and autosomal gene pool, whereas the northern African paternal gene pool was mainly shaped by more ancient events.

In other words, modern North Africans have some maternal (female) Sub-Saharan DNA that arrived recently via the Islamic slave trade, but most of their Sub-Saharan Y-DNA (male) is much older, hailing from the last time the Sahara was easy to cross.
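A toy calculation makes the “sex-biased” logic easier to see: mtDNA is inherited only through mothers, the Y chromosome only through fathers, and the autosomes average the two. The numbers below are invented for illustration, not taken from the paper:

```python
# Toy illustration of "sex-biased" admixture (numbers invented, not from the paper):
# mtDNA follows the maternal line, the Y chromosome follows the paternal line,
# and the autosomes average contributions from both sexes.

female_migrant_fraction = 0.20   # assumed share of ancestors who were sub-Saharan women
male_migrant_fraction = 0.05     # assumed share of ancestors who were sub-Saharan men

mtdna_ancestry = female_migrant_fraction                              # maternal line only
y_ancestry = male_migrant_fraction                                    # paternal line only
autosomal_ancestry = (female_migrant_fraction + male_migrant_fraction) / 2

print(f"mtDNA sub-Saharan ancestry:        {mtdna_ancestry:.0%}")
print(f"Y-chromosome sub-Saharan ancestry: {y_ancestry:.0%}")
print(f"Autosomal sub-Saharan ancestry:    {autosomal_ancestry:.1%}")
# A female-biased influx (e.g. via the trans-Saharan slave trade) shows up strongly
# in mtDNA and the autosomes but barely on the Y chromosome -- the pattern the
# authors report for North Africa.
```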

Note that not much DNA is shared across the Sahara:

After the African humid period, the climatic conditions became rapidly hyper-arid and the Green Sahara was replaced by the desert, which acted as a strong geographic barrier against human movements between northern and sub-Saharan Africa.

A consequence of this is that there is a strong differentiation in the Y chromosome haplogroup composition between the northern and sub-Saharan regions of the African continent. In the northern area, the predominant Y lineages are J-M267 and E-M81, with the former being linked to the Neolithic expansion in the Near East and the latter reaching frequencies as high as 80 % in some north-western populations as a consequence of a very recent local demographic expansion [8–10]. On the contrary, sub-Saharan Africa is characterised by a completely different genetic landscape, with lineages within E-M2 and haplogroup B comprising most of the Y chromosomes. In most regions of sub-Saharan Africa, the observed haplogroup distribution has been linked to the recent (~ 3 kya) demic diffusion of Bantu agriculturalists, which brought E-M2 sub-clades from central Africa to the East and to the South [11–17]. On the contrary, the sub-Saharan distribution of B-M150 seems to have more ancient origins, since its internal lineages are present in both Bantu farmers and non-Bantu hunter-gatherers and coalesce long before the Bantu expansion [18–20].

In spite of their genetic differentiation, however, northern and sub-Saharan Africa share at least four patrilineages at different frequencies, namely A3-M13, E-M2, E-M78 and R-V88.

A recent article in Nature, “Whole Y-chromosome sequences reveal an extremely recent origin of the most common North African paternal lineage E-M183 (M81),” tells some of North Africa’s fascinating story:

Here, by using whole Y chromosome sequences, we intend to shed some light on the historical and demographic processes that modelled the genetic landscape of North Africa. Previous studies suggested that the strategic location of North Africa, separated from Europe by the Mediterranean Sea, from the rest of the African continent by the Sahara Desert and limited to the East by the Arabian Peninsula, has shaped the genetic complexity of current North Africans15,16,17. Early modern humans arrived in North Africa 190–140 kya (thousand years ago)18, and several cultures settled in the area before the Holocene. In fact, a previous study by Henn et al.19 identified a gradient of likely autochthonous North African ancestry, probably derived from an ancient “back-to-Africa” gene flow prior to the Holocene (12 kya). In historic times, North Africa has been populated successively by different groups, including Phoenicians, Romans, Vandals and Byzantines. The most important human settlement in North Africa was conducted by the Arabs by the end of the 7th century. Recent studies have demonstrated the complexity of human migrations in the area, resulting from an amalgam of ancestral components in North African groups15,20.

According to the article, E-M81 is dominant in Northwest Africa and absent almost everywhere else in the world.

The authors tested various men across north Africa in order to draw up a phylogenetic tree of the branching of E-M183:

The distribution of each subhaplogroup within E-M183 can be observed in Table 1 and Fig. 2. Indeed, different populations present different subhaplogroup compositions. For example, whereas in Morocco almost all subhaplogroups are present, Western Sahara shows a very homogeneous pattern with only E-SM001 and E-Z5009 being represented. A similar picture to that of Western Sahara is shown by the Reguibates from Algeria, which contrast sharply with the Algerians from Oran, which showed a high diversity of haplogroups. It is also worth to notice that a slightly different pattern could be appreciated in coastal populations when compared with more inland territories (Western Sahara, Algerian Reguibates).

Overall, the authors found that the haplotypes were “strikingly similar” to each other and showed little geographic structure besides the coastal/inland differences:

As proposed by Larmuseau et al.25, the scenario that better explains Y-STR haplotype similarity within a particular haplogroup is a recent and rapid radiation of subhaplogroups. Although the dating of this lineage has been controversial, with dates proposed ranging from Paleolithic to Neolithic and to more recent times17,22,28, our results suggested that the origin of E-M183 is much more recent than was previously thought. … In addition to the recent radiation suggested by the high haplotype resemblance, the pattern showed by E-M183 imply that subhaplogroups originated within a relatively short time period, in a burst similar to those happening in many Y-chromosome haplogroups23.

In other words, someone went a-conquering.

Alternatively, given the high frequency of E-M183 in the Maghreb, a local origin of E-M183 in NW Africa could be envisaged, which would fit the clear pattern of longitudinal isolation by distance reported in genome-wide studies15,20. Moreover, the presence of autochthonous North African E-M81 lineages in the indigenous population of the Canary Islands, strongly points to North Africa as the most probable origin of the Guanche ancestors29. This, together with the fact that the oldest indigenous individuals have been dated 2210 ± 60 ya, supports a local origin of E-M183 in NW Africa. Within this scenario, it is also worth to mention that the paternal lineage of an early Neolithic Moroccan individual appeared to be distantly related to the typically North African E-M81 haplogroup30, suggesting again a NW African origin of E-M183. A local origin of E-M183 in NW Africa > 2200 ya is supported by our TMRCA estimates, which can be taken as 2,000–3,000, depending on the data, methods, and mutation rates used.

However, the authors also note that they can’t rule out a Middle Eastern origin for the haplogroup since their study simply doesn’t include genomes from Middle Eastern individuals. They rule out a spread during the Neolithic expansion (too early) but not the Islamic expansion (“an extensive, male-biased Near Eastern admixture event is registered ~1300 ya, coincidental with the Arab expansion20.”) Alternatively, they suggest E-M183 might have expanded near the end of the third Punic War. Sure, Carthage (in Tunisia) was defeated by the Romans, but the era was otherwise one of great North African wealth and prosperity.


Interesting papers! My hat’s off to the authors. I hope you enjoyed them and get a chance to RTWT.

How to Minimize “Emotional Labor” and “Mental Load”: A Guide for Frazzled Women

A comic strip in the Guardian recently alerted me to the fact that many women are exhausted from the “Mental Load” of thinking about things and need their husbands to pitch in and help. Go ahead and read it.

Whew. There’s a lot to unpack here:

  1. Yes, you have to talk to men. DO NOT EXPECT OTHER PEOPLE TO KNOW WHAT YOU ARE THINKING. Look, if I can get my husband to help me when I need it, you certainly can too. That or you married the wrong man.
  2. Get a dayplanner and write things like “grocery lists” and doctors’ appointments in it. There’s probably one built into your phone.

There, I solved your problems.

That said, female anxiety (at least in our modern world) appears to be a real thing:

(though American Indians are the real untold story in this graph.)

According to the America’s State of Mind Report (PDF):

Medco data shows that antidepressants are the most commonly used mental health medications and that women have the highest utilization rates.  In 2010, 21 percent of women ages 20 and older were using an antidepressant.  … Men’s use of antidepressants is almost half that of women, but has also been on the rise with a 28 percent increase over the past decade. …

Anxiety disorders are the most common psychiatric illnesses affecting children and adults. … Although anxiety disorders are highly treatable, only about one‐third of sufferers receive treatment. …

Medco data shows that women have the highest utilization rate of anti‐anxiety medications; in fact, 11 percent of middle‐aged women (ages 45‐64) were on an anti‐anxiety drug treatment in 2010, nearly twice the rate of their male counterparts (5.7 percent).

And based on the age group data, women in their prime working years (but waning childbearing years) have even higher rates of mental illness. (Adult women even take ADHD medicine at slightly higher rates than adult men.)

What causes this? Surely 20% of us–one in 5–can’t actually be mentally ill, can we? Is it biology or culture? Or perhaps a mismatch between biology and culture?

Or perhaps we should just scale back a little, and when we have friends over for dinner, just order a pizza instead of trying to cook two separate meals?

But if you think that berating your husband for merely taking a bottle out of the dishwasher when you asked him to get a bottle out of the dishwasher (instead of realizing this was code for “empty the entire dishwasher”) will make you happier, think again. “Couples who share the workload are more likely to divorce, study finds“:

Divorce rates are far higher among “modern” couples who share the housework than in those where the woman does the lion’s share of the chores, a Norwegian study has found. …

Norway has a long tradition of gender equality and childrearing is shared equally between mothers and fathers in 70 per cent of cases. But when it comes to housework, women in Norway still account for most of it in seven out of 10 couples. The study emphasised women who did most of the chores did so of their own volition and were found to be as “happy” as those in “modern” couples. …

The researchers expected to find that where men shouldered more of the burden, women’s happiness levels were higher. In fact they found that it was the men who were happier while their wives and girlfriends appeared to be largely unmoved.

Those men who did more housework generally reported less work-life conflict and were scored slightly higher for wellbeing overall.

Theory: well-adjusted people who love each other are happy to do what it takes to keep the household running and don’t waste time passive-aggressively trying to convince their spouse that he’s a bad person for not reading her mind.

Now let’s talk about biology. The author claims,

Of course, there’s nothing genetic or innate about this behavior. We’re not born with an all-consuming passion for clearing tables, just like boys aren’t born with an utter disinterest in things lying around.

Of course, the author doesn’t cite any papers from the fields of genetics or behavioral psychology to back up her claims. Just as she feels entitled to expect other people to read her mind, and absurdly thinks that a good project manager at work doesn’t bother to tell their team what needs to be done, she doesn’t feel any compulsion to cite any proof of her claims. Science says so. We know because some cartoonist on the internet claimed it did.

Over in reality-land, when we make scientific claims about things like genetics, we cite our sources. And women absolutely have an instinct for cleaning things: the Nesting Instinct. No, it isn’t present when we’re born. It kicks in when we’re pregnant–often shortly before going into labor. Here’s an actual scientific paper on the Nesting Instinct published in the scientific journal Evolution and Human Behavior:

In altricial mammals, “nesting” refers to a suite of primarily maternal behaviours including nest-site selection, nest building and nest defense, and the many ways that nonhuman animals prepare themselves for parturition are well studied. In contrast, little research has considered pre-parturient preparation behaviours in women from a functional perspective.

According to the university’s press release about the study:

The overwhelming urge that drives many pregnant women to clean, organize and get life in order—otherwise known  as nesting—is not irrational, but an adaptive behaviour stemming from humans’ evolutionary past.

Researchers from McMaster University suggest that these behaviours—characterized by unusual bursts of energy and a compulsion to organize the household—are a result of a mechanism to protect and prepare for the unborn baby.

Women also become more selective about the company they keep, preferring to spend time only with people they trust, say researchers.

In short, having control over the environment is a key feature of preparing for childbirth, including decisions about where the birth will take place and who will be welcome.

“Nesting is not a frivolous activity,” says Marla Anderson, lead author of the study and a graduate student in the Department of Psychology, Neuroscience & Behaviour.  “We have found that it peaks in the third trimester as the birth of the baby draws near and is an important task that probably serves the same purpose in women as it does in other animals.”

Even Wikipedia cites a number of sources on the subject:

Nesting behaviour refers to an instinct or urge in pregnant animals caused by the increase of estradiol (E2) [1] to prepare a home for the upcoming newborn(s). It is found in a variety of animals such as birds, fish, squirrels, mice and pigs as well as humans.[2][3]

Nesting is pretty much impossible to miss if you’ve ever been pregnant or around pregnant women.

Of course, this doesn’t prove the instinct persists (though in my personal case it definitely did.)

By the way, estradiol is the main form of estrogen, which is found at much higher levels in women than in men. (Just to be rigorous, here’s data on estrogen levels in normal men and women.)

So if high estradiol levels make a variety of mammals–including humans–want to clean things, and women between puberty and menopause consistently have higher levels of estrogen than men, then it seems fairly likely that women actually do have, on average, a higher innate, biological, instinctual, even genetic urge to clean and organize their homes than men do.

But returning to the comic, the author claims:

But we’re born into a society in which very early on, we’re given dolls and miniature vacuum cleaners, and in which it seems shameful for boys to like those same toys.

What bollocks. I used to work at a toy store. Yes, we stocked toy vacuum cleaners and the like in a “Little Helpers” set. We never sold a single one, and I worked there over Christmas. (Great times.)

I am always on the lookout for toys my kids would enjoy and receive constant feedback on whether they like my choices. (“A book? Why did Santa bring me a book? Books are boring!”)

I don’t spend money getting more of stuff my kids aren’t interested in. A child who doesn’t like dolls isn’t going to get a bunch of dolls and be ordered to sit and play with them and nothing else. A child who doesn’t like trucks isn’t going to get a bunch of trucks.

Assuming that other parents are neither stupid (unable to tell which toys their children like) nor evil (forcing their children to play with specific toys even though they know they don’t like them,) I conclude that children’s toys reflect the children’s actual preferences, not the parents’ (for goodness’s sake, if it were up to me, I’d socialize my children to be super-geniuses who spend all of their time reading textbooks and whose toys are all science and math manipulatives, not toy dump trucks!)

Even young rhesus monkeys–who cannot talk and obviously have not been socialized into human gender norms–have the same gendered toy preferences as humans:

We compared the interactions of 34 rhesus monkeys, living within a 135 monkey troop, with human wheeled toys and plush toys. Male monkeys, like boys, showed consistent and strong preferences for wheeled toys, while female monkeys, like girls, showed greater variability in preferences. Thus, the magnitude of preference for wheeled over plush toys differed significantly between males and females. The similarities to human findings demonstrate that such preferences can develop without explicit gendered socialization.

Young female chimps also make their own dolls:

Now new research suggests that such gender-driven desires are also seen in young female chimpanzees in the wild—a behavior that possibly evolved to make the animals better mothers, experts say.

Young females of the Kanyawara chimpanzee community in Kibale National Park, Uganda, use sticks as rudimentary dolls and care for them like the group’s mother chimps tend to their real offspring. The behavior, which was very rarely observed in males, has been witnessed more than a hundred times over 14 years of study.

In Jane Goodall’s revolutionary research on the Gombe Chimps, she noted the behavior of young females who often played with or held their infant siblings, in contrast to young males who generally preferred not to.

And just as estradiol levels have an effect on how much cleaning women want to do, so androgen levels have an effect on which toys children prefer to play with:

Gonadal hormones, particularly androgens, direct certain aspects of brain development and exert permanent influences on sex-typical behavior in nonhuman mammals. Androgens also influence human behavioral development, with the most convincing evidence coming from studies of sex-typical play. Girls exposed to unusually high levels of androgens prenatally, because they have the genetic disorder, congenital adrenal hyperplasia (CAH), show increased preferences for toys and activities usually preferred by boys, and for male playmates, and decreased preferences for toys and activities usually preferred by girls. Normal variability in androgen prenatally also has been related to subsequent sex-typed play behavior in girls, and nonhuman primates have been observed to show sex-typed preferences for human toys. These findings suggest that androgen during early development influences childhood play behavior in humans at least in part by altering brain development.

But the author of the comic strip would like us to believe that gender roles are a result of watching the wrong stuff on TV:

And in which culture and media essentially portray women as mothers and wives, while men are heroes who go on fascinating adventures away from home.

I don’t know about you, but I grew up in the Bad Old Days of the 80s, when She-Ra, Princess of Power, was kicking butt on TV; little girls were being magically transported to Ponyland to fight evil monsters; and Rainbow Brite defeated the evil King of Shadows and saved the Color Kids.


If you’re older than me, perhaps you grew up watching Wonder Woman (first invented in 1941) and Leia Skywalker; and if you’re younger, Dora the Explorer and Katniss Everdeen.

If you can’t find adventurous female characters in movies or TV, YOU AREN’T LOOKING.

I mentioned this recently: it’s like the Left has no idea what the past–anytime before last Tuesday–actually contained. Somehow the 60s, 70s, 80s, 90s, and 2000s have entirely disappeared, and they live in a timewarp where we are connected directly to the media and gender norms of over half a century ago.

Enough. The Guardian comic is a load of entitled whining from someone who actually thinks that other people are morally obligated to try to read her mind. She has the maturity of a bratty teenager (“You should have known I hate this band!”) and needs to learn how to actually communicate with others instead of complaining that it’s everyone else who has a problem.


Anthropology Friday: Numbers and the Making of Us, by Caleb Everett pt. 4

Yes, but which 25% of us is grape?

Welcome to our final post on Numbers and the Making of Us: Counting and the Course of Human Cultures, by Caleb Everett. Today I just want to highlight a few interesting passages.


For example, there is about 25% overlap between the human genome and that of grapes. (And we have fewer genes than grapes!) So some caution should be exercised before reading too much into percentages of genomic correspondence across species. I doubt, after all, that you consider yourself one-quarter grape. … canine and bovine species generally exhibit about an 85% rate of genomic correspondence with humans. … small changes in genetic makeup can, among other influences, lead to large changes in brain size.

On the development of numbers:

Babylonian math homework

After all, for the vast majority of our species’ existence, we lived as hunters and gatherers in Africa … A reasonable interpretation of the contemporary distribution of cultural and number-system types, then, is that humans did not rely on complex number system for the bulk of their history. We can also reasonably conclude that transitions to larger, more sedentary, and more trade-based cultures helped pressure various groups to develop more involved numerical technologies. … Written numerals, and writing more generally, were developed first in the Fertile Crescent after the agricultural revolution began there. … These pressures ultimately resulted in numerals and other written symbols, such as the clay-token based numerals … The numerals then enabled new forms of agriculture and trade that required the exact discrimination and representation of quantities. The ancient Mesopotamian case is suggestive, then, of the motivation for the present-day correlation between subsistence and number types: larger agricultural and trade-based economies require numerical elaboration to function. …

Intriguingly, though, the same may be true of Chinese writing, the earliest samples of which date to the Shang Dynasty and are 3,000 years old. The most ancient of these samples are oracle bones. These bones were inscribed with numerals quantifying such items as enemy prisoners, birds and animals hunted, and sacrificed animals. … Ancient writing around the world is numerically focused.

Changes in the Jungle as population growth makes competition for resources more intense and forces people out of their traditional livelihoods:

Consider the case of one of my good friends, a member of an indigenous group known as the Karitiana. … Paulo spent the majority of his childhood, in the 1980s and 1990s in the largest village of his people’s reservation. … While some Karitiana sought to make a living in nearby Porto Velho, many strived to maintain their traditional way of life on their reservation. At the time this was feasible, and their traditional subsistence strategies of hunting, gathering, and horticulture could be realistically practiced. Recently, however, maintaining their conventional way of life has become a less tenable proposition. … many Karitiana feel they have little choice but to seek employment in the local Brazilian economy… This is certainly true of Paulo. He has been enrolled in Brazilian schools for some time, has received some higher education, and is currently employed by a governmental organization. To do these things, of course, Paulo had to learn Portuguese grammar and writing. And he had to learn numbers and math, also. In short, the socioeconomic pressures he has felt to acquire the numbers of another culture are intense.

Everett cites a statistic that >90% of the world’s approximately 7,000 languages are endangered.

They are endangered primarily because people like Paulo are being conscripted into larger nation-states, gaining fluency in more economically viable languages. … From New Guinea to Australia to Amazonia and elsewhere, the mathematizing of people is happening.

On the advantages of different number systems:

Recent research also suggests that the complexity of some non-linguistic number systems has been underappreciated. Many counting boards and abaci that have been used, and are still in use across the world’s cultures, present clear advantages to those using them … the abacus presents some cognitive advantages. That is because, research now suggests, children who are raised using the abacus develop a “mental abacus” with time. … According to recent cross-cultural findings, practitioners of abacus-based mathematical strategies outperform those unfamiliar with such strategies, at least in some mathematical tasks. The use of the Soroban abacus has, not coincidentally, now been adopted in many schools throughout Asia.

The zero is a dot in the middle of the photo–earliest known zero, Cambodia

I suspect these higher math scores are more due to the mental abilities of the people using the abacus than the abacus itself. I have also just ordered an abacus.

… in 2015 the world’s oldest known unambiguous inscription of a circular zero was rediscovered in Cambodia. The zero in question, really a large dot, serves as a placeholder in the ancient Khmer numeral for 605. It is inscribed on a stone tablet, dating to 683 CE, that was found only kilometers from the faces of Bayon and other ruins of Angkor Wat and Angkor Thom. … the Maya also developed a written form for zero, and the Inca encoded the concept in their Quipu.

In 1202, Fibonacci wrote the Book of Calculation, which promoted the use of the superior Arabic (yes Hindu) numerals (zero included) over the old Roman ones. Just as the introduction of writing jump-started the Cherokee publishing industry, so the introduction of superior numerals probably helped jump-start the Renaissance.
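Here’s a small illustration of why the placeholder matters (my example, not Everett’s): in positional notation, “nothing in this place” has to be written down explicitly–exactly what the Khmer dot in 605 does–whereas an additive system like Roman numerals never needs a zero at all:

```python
# Why a placeholder zero matters (my illustration, not from the book): positional
# notation must record "nothing in this place" explicitly -- the job of the Khmer
# dot in 605 -- while additive Roman numerals never need a zero symbol at all.

def place_values(n, base=10):
    """Decompose n into (digit, place value) pairs, most significant first."""
    digits = []
    while n:
        digits.append(n % base)
        n //= base
    return [(d, base ** i) for i, d in enumerate(digits)][::-1]

def to_roman(n):
    """Convert n to a Roman numeral; empty places simply never appear."""
    table = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"), (100, "C"),
             (90, "XC"), (50, "L"), (40, "XL"), (10, "X"), (9, "IX"),
             (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, symbol in table:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

print(place_values(605))  # [(6, 100), (0, 10), (5, 1)] -- the 0 in the tens place must be written
print(to_roman(605))      # 'DCV' -- no zero needed, but arithmetic is far clumsier
```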

Cities and the rise of organized religion:

…although creation myths, animistic practices, and other forms of spiritualism are universal or nearly universal, large-scale hierarchical religions are restricted to relatively few cultural lineages. Furthermore, these religions… developed only after people began living in larger groups and settlements because of their agricultural lifestyles. … A phalanx of scholars has recently suggested that the development of major hierarchical religions, like the development of hierarchical governments, resulted from the agglomeration of people in such places. …

Organized religious beliefs, with moral-enforcing deities and priest caste, were a by-product of the need for large groups of people to cooperate via shared morals and altruism. As the populations of cultures grew after the advent of agricultural centers… individuals were forced to rely on shared trust with many more individuals, including non-kin, than was or is the case in smaller groups like bands or tribes. … Since natural selection is predicated on the protection of one’s genes, in-group altruism and sacrifice are easier to make sense of in bands and tribes. But why would humans in much larger populations–humans who have no discernible genetic relationship… cooperate with these other individuals in their own culture? … some social mechanism had to evolve so that larger cultures would not disintegrate due to competition among individuals and so that many people would not freeload off the work of others. One social mechanism that fosters prosocial and cooperative behavior is an organized religion based on shared morals and omniscient deities capable of keeping track of the violation of such morals. …

When Moses descended from Mt. Sinai with his stone tablets, they were inscribed with ten divine moral imperatives. … Why ten? … Here is an eleventh commandment that could likely be uncontroversially adopted by many people: “thou shalt not torture.” … But then the list would appear to lose some of its rhetorical heft. “Eleven commandments” almost hints of a satirical deity.

Technically there are 613 commandments, but that’s not nearly as catchy as the Ten Commandments–inadvertently proving Everett’s point.

Overall, I found this book frustrating and repetitive, but there were some good parts. I’ve left out most of the discussion of the Piraha and similar cultures, and the rather fascinating case of Nicaraguan homesigners (“homesigners” are deaf people who were never taught a formal sign language but made up their own.) If you’d like to learn more about them, you might want to look up the book at your local library.

Is Crohn’s Disease Tuberculosis of the Intestines?

Source: Rise in Crohn’s Disease admission rates, Glasgow

Crohn’s is an inflammatory disease of the digestive tract involving diarrhea, vomiting, internal lesions, pain, and severe weight loss. Left untreated, Crohn’s can lead to death through direct starvation/malnutrition, infections caused by the intestinal walls breaking down and spilling feces into the rest of the body, or a whole host of other horrible symptoms, like pyoderma gangrenosum–basically your skin just rotting off.

Crohn’s disease has no known cause and no cure, though several treatments have proven effective at putting it into remission–at least temporarily.

The disease appears to be triggered by a combination of environmental, bacterial, and genetic factors–about 70 genes have been identified so far that appear to contribute to an individual’s chance of developing Crohn’s, but no gene has been found yet that definitely triggers it. (The siblings of people who have Crohn’s are more likely than non-siblings to also have it, and identical twins of Crohn’s patients have a 55% chance of developing it.) A variety of environmental factors, such as living in a first world country (parasites may be somewhat protective against the disease), smoking, or eating lots of animal protein, also correlate with Crohn’s, but since only 3.2 in 1,000 people even in the West have it, these obviously don’t trigger the disease in most people.

Crohn’s appears to be a kind of over-reaction of the immune system, though not specifically an auto-immune disorder, which suggests that a pathogen of some sort is probably involved. Most people are probably able to fight off this pathogen, but people with a variety of genetic issues may have more trouble–according to Wikipedia, “There is considerable overlap between susceptibility loci for IBD and mycobacterial infections.[62]” Mycobacteria are a genus of bacteria that includes species like tuberculosis and leprosy. A variety of bacteria–including specific strains of E. coli, Yersinia, Listeria, and Mycobacterium avium subspecies paratuberculosis–are found in the intestines of Crohn’s sufferers at higher rates than in the intestines of non-sufferers (intestines, of course, are full of all kinds of bacteria.)

Source: The Gutsy Group

Crohn’s treatment depends on the severity of the case and specific symptoms, but often includes a course of antibiotics (especially if the patient has abscesses), tube feeding (in acute cases where the sufferer is having trouble digesting food), and long-term immune-system suppressants such as prednisone, methotrexate, or infliximab. In severe cases, damaged portions of the intestines may be cut out. Before the development of immunosuppressant treatments, sufferers often progressively lost more and more of their intestines, with predictably unpleasant results, like no longer having a functioning colon. (70% of Crohn’s sufferers eventually have surgery.)

A similar disease, Johne’s, infects cattle. Johne’s is caused by Mycobacterium avium subspecies paratuberculosis (hereafter just MAP). MAP typically infects calves at birth, transmitted via infected feces from their mothers, incubates for two years, and then manifests as diarrhea, malnutrition, dehydration, wasting, starvation, and death. Luckily for cows, there’s a vaccine, though any infectious disease in a herd is a problem for farmers.

If you’re thinking that “paratuberculosis” sounds like “tuberculosis,” you’re correct. When scientists first isolated it, they thought the bacteria looked rather like tuberculosis, hence the name, “tuberculosis-like.” The scientists’ instincts were correct, and it turns out that MAP is in the same bacterial genus as tuberculosis and leprosy (though it may be more closely related to leprosy than TB.) (“Genus” is one step up from “species”: our species is Homo sapiens; our genus, Homo, we share with Homo neanderthalensis, Homo erectus, etc., but chimps and gorillas are not in the genus Homo.)

Figure A: Crohn’s Disease in Humans. Figure B: Johne’s Disease in Animals. (Greenstein, Lancet Infectious Diseases, 2004; H/T Human Para Foundation)

The intestines of cattle who have died of MAP look remarkably like the intestines of people suffering from advanced Crohn’s disease.

MAP can actually infect all sorts of mammals, not just cows; it’s just more common and problematic in cattle herds. (Sorry, we’re not getting through this post without photos of infected intestines.)

So here’s how it could work:

The MAP bacteria–possibly transmitted via milk or meat products–is fairly common and infects a variety of mammals. Most people who encounter it fight it off with no difficulty (or perhaps have a short bout of diarrhea and then recover.)

A few people, though, have genetic issues that make it harder for them to fight off the infection. For example, Crohn’s sufferers produce less intestinal mucus, which normally acts as a barrier between the intestines and all of the stuff in them.

Interestingly, parasite infections can increase intestinal mucus (some parasites feed on mucus), which in turn is protective against other forms of infection; decreasing parasite load can increase the chance of other intestinal infections.

Once MAP enters the intestinal walls, the immune system attempts to fight it off, but a genetic defect in autophagy (the process cells use to destroy pathogens hiding inside them) results in the immune cells themselves getting infected. The body responds to the signs of infection by sending more immune cells to fight it, which subsequently also get infected with MAP, triggering the body to send even more immune cells. These lumps of infected cells become the characteristic ulcerations and lesions that mark Crohn’s disease and eventually leave the intestines riddled with inflamed tissue and holes.

The most effective treatments for Crohn’s, like Infliximab, don’t target infection but the immune system. They work by interrupting the immune system’s feedback cycle so that it stops sending more cells to the infected area, giving the already infected cells a chance to die. It doesn’t cure the disease, but it does give the intestines time to recover.

Unfortunately, this means infliximab raises your chance of developing TB:

There were 70 reported cases of tuberculosis after treatment with infliximab for a median of 12 weeks. In 48 patients, tuberculosis developed after three or fewer infusions. … Of the 70 reports, 64 were from countries with a low incidence of tuberculosis. The reported frequency of tuberculosis in association with infliximab therapy was much higher than the reported frequency of other opportunistic infections associated with this drug. In addition, the rate of reported cases of tuberculosis among patients treated with infliximab was higher than the available background rates.

because it is actively suppressing the immune system’s ability to fight diseases in the TB family.

Luckily, if you live in the first world and aren’t in prison, you’re unlikely to catch TB–only about 5-10% of the US population tests positive for TB, compared to 80% in many African and Asian countries. (In other words, increased immigration from these countries will absolutely put Crohn’s sufferers at risk of dying.)

There are a fair number of similarities between Crohn’s, TB, and leprosy: they are all very slow diseases that can take years to finally kill you. By contrast, other deadly diseases, like smallpox, cholera, and Yersinia pestis (plague), spread and kill extremely quickly. Within about two weeks, you’ll definitely know whether your plague infection is going to kill you or not, whereas you can have leprosy for 20 years before you even notice it.

TB, like Crohn’s, creates granulomas:

Tuberculosis is classified as one of the granulomatous inflammatory diseases. Macrophages, T lymphocytes, B lymphocytes, and fibroblasts aggregate to form granulomas, with lymphocytes surrounding the infected macrophages. When other macrophages attack the infected macrophage, they fuse together to form a giant multinucleated cell in the alveolar lumen. The granuloma may prevent dissemination of the mycobacteria and provide a local environment for interaction of cells of the immune system.[63] However, more recent evidence suggests that the bacteria use the granulomas to avoid destruction by the host’s immune system. … In many people, the infection waxes and wanes.

Crohn’s also waxes and wanes. Many sufferers experience flare-ups of the disease, during which they may have to be hospitalized, tube fed, and put through another round of antibiotics or resection (surgical removal of part of the intestines) before they improve–until the disease flares up again.

Leprosy is also marked by lesions, though of course so are dozens of other diseases.

Note: Since Crohn’s is a complex, multi-factorial disease, there may be more than one bacteria or pathogen that could infect people and create similar results. Alternatively, Crohn’s sufferers may simply have intestines that are really bad at fighting off all sorts of diseases, as a side effect of Crohn’s, not a cause, resulting in a variety of unpleasant infections.

The MAP hypothesis suggests several possible treatment routes:

  1. Improving the intestinal mucus, perhaps via parasites or medicines derived from parasites
  2. Improving the intestinal microbe balance
  3. Antibiotics that treat MAP
  4. An anti-MAP vaccine similar to the one for Johne’s disease in cattle
  5. Eliminating MAP from the food supply

Here’s an article about the parasites and Crohn’s:

To determine how the worms could be our frenemies, Cadwell and colleagues tested mice with the same genetic defect found in many people with Crohn’s disease. Mucus-secreting cells in the intestines malfunction in the animals, reducing the amount of mucus that protects the gut lining from harmful bacteria. Researchers have also detected a change in the rodents’ microbiome, the natural microbial community in their guts. The abundance of one microbe, an inflammation-inducing bacterium in the Bacteroides group, soars in the mice with the genetic defect.

The researchers found that feeding the rodents one type of intestinal worm restored their mucus-producing cells to normal. At the same time, levels of two inflammation indicators declined in the animals’ intestines. In addition, the bacterial lineup in the rodents’ guts shifted, the team reports online today in Science. Bacteroides’s numbers plunged, whereas the prevalence of species in a different microbial group, the Clostridiales, increased. A second species of worm also triggers similar changes in the mice’s intestines, the team confirmed.

To check whether helminths cause the same effects in people, the scientists compared two populations in Malaysia: urbanites living in Kuala Lumpur, who harbor few intestinal parasites, and members of an indigenous group, the Orang Asli, who live in a rural area where the worms are rife. A type of Bacteroides, the proinflammatory microbes, predominated in the residents of Kuala Lumpur. It was rarer among the Orang Asli, where a member of the Clostridiales group was plentiful. Treating the Orang Asli with drugs to kill their intestinal worms reversed this pattern, favoring Bacteroides species over Clostridiales species, the team documented.

This sounds unethical unless they were merely tagging along with another team of doctors who were de-worming the Orang Asli for normal health reasons, and didn’t intend to potentially inflict Crohn’s on people. Nevertheless, it’s an interesting study.

At any rate, so far they haven’t managed to produce an effective medicine from parasites, possibly in part because people think parasites are icky.

But if parasites aren’t disgusting enough for you, there’s always the option of directly changing the gut bacteria: fecal microbiota transplants (FMT).  A fecal transplant is exactly what it sounds like: you take the regular feces out of the patient and put in new, fresh feces from an uninfected donor. (When your other option is pooping into a bag for the rest of your life because your colon was removed, swallowing a few poop pills doesn’t sound so bad.) EG, Fecal microbiota transplant for refractory Crohn’s:

Approximately one-third of patients with Crohn’s disease do not respond to conventional treatments, and some experience significant adverse effects, such as serious infections and lymphoma, and many patients require surgery due to complications. .. Herein, we present a patient with Crohn’s colitis in whom biologic therapy failed previously, but clinical remission and endoscopic improvement was achieved after a single fecal microbiota transplantation infusion.

Here’s a Chinese doctor who appears to have good success with FMTs to treat Crohn’s–improvement in 87% of patients one month after treatment and remission in 77%, though the effects may wear off over time. Note: even infliximab, considered a “wonder drug” for its amazing abilities, only works for about 50-75% of patients, must be administered via regular IV infusions for life (or until it stops working,) costs about $20,000 a year per patient, and has some serious side effects, like cancer. If fecal transplants can get the same results, that’s pretty good.

Little known fact: “In the United States, the Food and Drug Administration (FDA) has regulated human feces as an experimental drug since 2013.”

Antibiotics are another potential route. Redhill Biopharma is conducting a phase III clinical study of antibiotics designed to fight MAP in Crohn’s patients. Redhill is expected to release some of their results in April.

A Crohn’s MAP vaccine trial is underway in healthy volunteers:

Mechanism of action: The vaccine is what is called a ‘T-cell’ vaccine. T-cells are a type of white blood cell -an important player in the immune system- in particular, for fighting against organisms that hide INSIDE the body’s cells –like MAP does. Many people are exposed to MAP but most don’t get Crohn’s –Why? Because their T-cells can ‘see’ and destroy MAP. In those who do get Crohn’s, the immune system has a ‘blind spot’ –their T-cells cannot see MAP. The vaccine works by UN-BLINDING the immune system to MAP, reversing the immune dysregulation and programming the body’s own T-cells to seek out and destroy cells containing MAP. For general information, there are two informative videos about T Cells and the immune system below.

Efficacy: In extensive tests in animals (in mice and in cattle), 2 shots of the vaccine spaced 8 weeks apart proved to be a powerful, long-lasting stimulant of immunity against MAP. To read the published data from the trial in mice, click here. To read the published data from the trial in cattle, click here.

Before: Fistula in the intestines, 31 year old Crohn’s patient–Dr Borody, Combining infliximab, anti-MAP and hyperbaric oxygen therapy for resistant fistulizing Crohn’s disease

Dr. Borody (who was influential in the discovery that ulcers are caused by the H. pylori bacteria, not stress) has had amazing success treating Crohn’s patients with a combination of infliximab, anti-MAP antibiotics, and hyperbaric oxygen. Here are two of his before and after photos of the intestines of a 31-year-old Crohn’s sufferer:

Here are some more interesting articles on the subject:

Sources: Is Crohn’s Disease caused by a Mycobacterium? Comparisons with Tuberculosis, Leprosy, and Johne’s Disease.

What is MAP?

Researcher Finds Possible link Between Cattle and Human Diseases:

Last week, Davis and colleagues in the U.S. and India published a case report in Frontiers of Medicine . The report described a single patient, clearly infected with MAP, with the classic features of Johne’s disease in cattle, including the massive shedding of MAP in his feces. The patient was also ill with clinical features that were indistinguishable from the clinical features of Crohn’s. In this case though, a novel treatment approach cleared the patient’s infection.

The patient was treated with antibiotics known to be effective for tuberculosis, which then eliminated the clinical symptoms of Crohn’s disease, too.

After: The same intestines, now healed

Psychology Today: Treating Crohn’s Disease:

Through luck, hard work, good fortune, perseverance, and wonderful doctors, I seem to be one of the few people in the world who can claim to be “cured” of Crohn’s Disease. … In brief, I was treated for 6 years with medications normally used for multidrug resistant TB and leprosy, under the theory that a particular germ causes Crohn’s Disease. I got well, and have been entirely well since 2004. I do not follow a particular diet, and my recent colonoscopies and blood work have shown that I have no inflammation. The rest of these 3 blogs will explain more of the story.

What about removing Johne’s disease from the food supply? Assuming Johne’s is the culprit, this may be hard to do, (it’s pretty contagious in cattle, can lie dormant for years, and survives cooking) but drinking ultrapasteurized milk may be protective, especially for people who are susceptible to the disease.


However… there are also studies that contradict the MAP theory. For example, a recent study of the rate of Crohn’s disease in people exposed to Johne’s disease found no correlation. (However, Crohn’s is a pretty rare condition, and the survey only found 7 total cases, which is small enough that random chance could be a factor–but we are talking about people who probably got very up close and personal with feces infected with MAP.)
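Just to show how little seven cases can pin down, here’s a quick sketch (my own illustration, not part of the study) of the exact Poisson 95% confidence interval around an observed count of 7:

```python
# How much uncertainty comes with an observed count of 7? A quick sketch (my own,
# not from the cited study) of the exact (Garwood) Poisson 95% confidence interval.

from scipy.stats import chi2

def poisson_exact_ci(k, alpha=0.05):
    """Exact two-sided CI for the mean of a Poisson count k."""
    lower = 0.0 if k == 0 else chi2.ppf(alpha / 2, 2 * k) / 2
    upper = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / 2
    return lower, upper

observed_cases = 7
low, high = poisson_exact_ci(observed_cases)
print(f"Observed {observed_cases} cases; 95% CI for the expected count: {low:.1f} to {high:.1f}")
# Roughly 2.8 to 14.4 -- the true rate could plausibly be less than half or nearly
# double what was observed, which is why 7 cases can't settle the question.
```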

Another study found a negative correlation between Crohn’s and milk consumption:

Logistic regression showed no significant association with measures of potential contamination of water sources with MAP, water intake, or water treatment. Multivariate analysis showed that consumption of pasteurized milk (per kg/month: odds ratio (OR) = 0.82, 95% confidence interval (CI): 0.69, 0.97) was associated with a reduced risk of Crohn’s disease. Meat intake (per kg/month: OR = 1.40, 95% CI: 1.17, 1.67) was associated with a significantly increased risk of Crohn’s disease, whereas fruit consumption (per kg/month: OR = 0.78, 95% CI: 0.67, 0.92) was associated with reduced risk.

So even if Crohn’s is caused by MAP or something similar, it appears that people aren’t catching it from milk.
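For readers who don’t work with odds ratios every day, here is a rough, face-value reading of the milk figure (my own arithmetic, not a claim from the study): an OR of 0.82 per kg/month means each additional kilogram of milk per month multiplies the odds of Crohn’s by 0.82, so an extra 5 kg/month corresponds to

```latex
% Face-value extrapolation of the quoted coefficient (illustrative only):
\mathrm{OR}_{+5\ \mathrm{kg/month}} = 0.82^{5} \approx 0.37
```

i.e. roughly one-third the odds. Extrapolations like this overstate the precision of a single survey, but they give a feel for the direction and size of the association.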

There are other theories about what causes Crohn’s–these folks, for example, think it’s related to consumption of GMO corn. Perhaps MAP has only been found in the intestines of Crohn’s patients because people with Crohn’s are really bad at fighting off infections. Perhaps the whole thing is caused by weird gut bacteria, or not enough parasites, insufficient Vitamin D, or industrial pollution.

The condition remains very much a mystery.

Two Interesting Studies: Early Humans in SE Asia, and Genetics, Relationships, and Mental Illness

Ancient Teeth Push Back Early Arrival of Humans in Southeast Asia:

New tests on two ancient teeth found in a cave in Indonesia more than 120 years ago have established that early modern humans arrived in Southeast Asia at least 20,000 years earlier than scientists previously thought, according to a new study. …

The findings push back the date of the earliest known modern human presence in tropical Southeast Asia to between 63,000 and 73,000 years ago. The new study also suggests that early modern humans could have made the crossing to Australia much earlier than the commonly accepted time frame of 60,000 to 65,000 years ago.

I would like to emphasize that nothing based on a couple of teeth is conclusive, “settled,” or “proven” science. Samples can get contaminated, machines make errors, people play tricks–in the end, we’re looking for the weight of the evidence.

I am personally of the opinion that there were (at least) two ancient human migrations into south east Asia, but only time will tell if I am correct.

Genome-wide association study of social relationship satisfaction: significant loci and correlations with psychiatric conditions, by Varun Warrier, Thomas Bourgeron, Simon Baron-Cohen:

We investigated the genetic architecture of family relationship satisfaction and friendship satisfaction in the UK Biobank. …

In the DSM-5, difficulties in social functioning is one of the criteria for diagnosing conditions such as autism, anorexia nervosa, schizophrenia, and bipolar disorder. However, little is known about the genetic architecture of social relationship satisfaction, and if social relationship dissatisfaction genetically contributes to risk for psychiatric conditions. …

We present the results of a large-scale genome-wide association study of social relationship satisfaction in the UK Biobank measured using family relationship satisfaction and friendship satisfaction. Despite the modest phenotypic correlations, there was a significant and high genetic correlation between the two phenotypes, suggesting a similar genetic architecture between the two phenotypes.

Note: the two “phenotypes” here are “family relationship satisfaction” and “friendship satisfaction.”
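To see how a “modest phenotypic correlation” can coexist with a high genetic correlation, here is a toy simulation of my own (the paper estimates genetic correlations from GWAS summary statistics, not anything like this): both traits load on the same genetic factor, but most of their variance comes from independent environmental noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
g = rng.normal(size=n)  # a single shared "genetic" factor

# Both traits load weakly on g and heavily on independent noise.
family_satisfaction = 0.3 * g + rng.normal(size=n)
friend_satisfaction = 0.3 * g + rng.normal(size=n)

# The phenotypic correlation is modest (~0.08)...
print(np.corrcoef(family_satisfaction, friend_satisfaction)[0, 1])
# ...yet the genetic correlation is 1.0 by construction, because the
# genetic component of both traits is literally the same g.
```

Run in reverse, the same logic is how a high genetic correlation can be read off two traits that barely correlate at the phenotypic level.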

We first investigated if the two phenotypes were genetically correlated with psychiatric conditions. As predicted, most if not all psychiatric conditions had a significant negative correlation for the two phenotypes. … We observed significant negative genetic correlation between the two phenotypes and a large cross-condition psychiatric GWAS [38]. This underscores the importance of social relationship dissatisfaction in psychiatric conditions. …

In other words, people with mental illnesses generally don’t have a lot of friends nor get along with their families.

One notable exception is the negative genetic correlation between measures of cognition and the two phenotypes. Whilst subjective wellbeing is positively genetically correlated with measures of cognition, we identify a small but statistically significant negative correlation between measures of cognition and the two phenotypes.

Are they saying that smart people have fewer friends? Or that dumber people are happier with their friends and families? I think they are clouding this finding in intentionally obtuse language.

A recent study highlighted that people with very high IQ scores tend to report lower satisfaction with life with more frequent socialization.

Oh, I think I read that one. It’s not the socialization per se that’s the problem, but spending time away from the smart person’s intellectual activities. For example, I enjoy discussing the latest genetics findings with friends, but I don’t enjoy going on family vacations because they are a lot of work that does not involve genetics. (This is actually something my relatives complain about.)

…alleles that increase the risk for schizophrenia are in the same haplotype as alleles that decrease friendship satisfaction. The functional consequences of this locus must be formally tested. …

Loss of function mutations in these genes lead to severe biochemical consequences, and are implicated in several neuropsychiatric conditions. For example, de novo loss of function mutations in pLI intolerant genes confers significant risk for autism. Our results suggest that pLI > 0.9 genes contribute to psychiatric risk through both common and rare genetic variation.

Two Exciting Papers on African Genetics

I loved that movie
Nǃxau ǂToma, (aka Gcao Tekene Coma,) Bushman star of “The Gods Must be Crazy,” roughly 1944-2003

An interesting article on Clues to Africa’s Mysterious Past appeared recently in the NY Times:

It was only two years ago that researchers found the first ancient human genome in Africa: a skeleton in a cave in Ethiopia yielded DNA that turned out to be 4,500 years old.

On Thursday, an international team of scientists reported that they had recovered far older genes from bone fragments in Malawi dating back 8,100 years. The researchers also retrieved DNA from 15 other ancient people in eastern and southern Africa, and compared the genes to those of living Africans.

Let’s skip to the article, Reconstructing Prehistoric African Population Structure by Skoglund et al:

We assembled genome-wide data from 16 prehistoric Africans. We show that the anciently divergent lineage that comprises the primary ancestry of the southern African San had a wider distribution in the past, contributing approximately two-thirds of the ancestry of Malawi hunter-gatherers ∼8,100–2,500 years ago and approximately one-third of the ancestry of Tanzanian hunter-gatherers ∼1,400 years ago.

Paths of the great Bantu Migration

The San are also known as the Bushmen, a famous group of recent hunter-gatherers from southern Africa.

We document how the spread of farmers from western Africa involved complete replacement of local hunter-gatherers in some regions…

This is most likely the Great Bantu Migration, which I wrote about in Into Africa: the Great Bantu Migration.

…and we track the spread of herders by showing that the population of a ∼3,100-year-old pastoralist from Tanzania contributed ancestry to people from northeastern to southern Africa, including a ∼1,200-year-old southern African pastoralist…

Whereas the two individuals buried in ∼2,000 BP hunter-gatherer contexts in South Africa share ancestry with southern African Khoe-San populations in the PCA, 11 of the 12 ancient individuals who lived in eastern and south-central Africa between ∼8,100 and ∼400 BP form a gradient of relatedness to the eastern African Hadza on the one hand and southern African Khoe-San on the other (Figure 1A).

The Hadza are a hunter-gatherer group from Tanzania who are not obviously related to any other people. Their language has traditionally been classed alongside the languages of the KhoiSan/Bushmen people because they all contain clicks, but the languages otherwise have very little in common and Hadza appears to be a language isolate, like Basque.

The genetic cline correlates to geography, running along a north-south axis with ancient individuals from Ethiopia (∼4,500 BP), Kenya (∼400 BP), Tanzania (both ∼1,400 BP), and Malawi (∼8,100–2,500 BP), showing increasing affinity to southern Africans (both ancient individuals and present-day Khoe-San). The seven individuals from Malawi show no clear heterogeneity, indicating a long-standing and distinctive population in ancient Malawi that persisted for at least ∼5,000 years (the minimum span of our radiocarbon dates) but which no longer exists today. …

We find that ancestry closely related to the ancient southern Africans was present much farther north and east in the past than is apparent today. This ancient southern African ancestry comprises up to 91% of the ancestry of Khoe-San groups today (Table S5), and also 31% ± 3% of the ancestry of Tanzania_Zanzibar_1400BP, 60% ± 6% of the ancestry of Malawi_Fingira_6100BP, and 65% ± 3% of the ancestry of Malawi_Fingira_2500BP (Figure 2A). …

Both unsupervised clustering (Figure 1B) and formal ancestry estimation (Figure 2B) suggest that individuals from the Hadza group in Tanzania can be modeled as deriving all their ancestry from a lineage related deeply to ancient eastern Africans such as the Ethiopia_4500BP individual …

So what’s up with the Tanzanian expansion mentioned in the summary?

Western-Eurasian-related ancestry is pervasive in eastern Africa today … and the timing of this admixture has been estimated to be ∼3,000 BP on average… We found that the ∼3,100 BP individual… associated with a Savanna Pastoral Neolithic archeological tradition, could be modeled as having 38% ± 1% of her ancestry related to the nearly 10,000-year-old pre-pottery farmers of the Levant. These results could be explained by migration into Africa from descendants of pre-pottery Levantine farmers or alternatively by a scenario in which both pre-pottery Levantine farmers and Tanzania_Luxmanda_3100BP descend from a common ancestral population that lived thousands of years earlier in Africa or the Near East. We fit the remaining approximately two-thirds of Tanzania_Luxmanda_3100BP as most closely related to the Ethiopia_4500BP…

…present-day Cushitic speakers such as the Somali cannot be fit simply as having Tanzania_Luxmanda_3100BP ancestry. The best fitting model for the Somali includes Tanzania_Luxmanda_3100BP ancestry, Dinka-related ancestry, and 16% ± 3% Iranian-Neolithic-related ancestry (p = 0.015). This suggests that ancestry related to the Iranian Neolithic appeared in eastern Africa after earlier gene flow related to Levant Neolithic populations, a scenario that is made more plausible by the genetic evidence of admixture of Iranian-Neolithic-related ancestry throughout the Levant by the time of the Bronze Age …and in ancient Egypt by the Iron Age …

There is then a discussion of possible models of ancient African population splits (were the Bushmen the first? How long have they been isolated?) I suspect the more ancient African DNA we uncover, the more complicated the tree will become, just as in Europe and Asia we’ve discovered Neanderthal and Denisovan admixture.

They also compared genomes to look for genetic adaptations and found evidence for selection for taste receptors and “response to radiation” in the Bushmen, which the authors note “could be due to exposure to sunlight associated with the life of the ‡Khomani and Ju|’hoan North people in the Kalahari Basin, which has become a refuge for hunter-gatherer populations in the last millennia due to encroachment by pastoralist and agriculturalist groups.”

(The Bushmen are lighter than Bantus, with a more golden or tan skin tone.)

They also found evidence of selection for short stature among the Pygmies (which isn’t really surprising to anyone, unless you thought they had acquired their heights by admixture with another very short group of people.)

Overall, this is a great paper and I encourage you to RTWT, especially the pictures/graphs.

Now, if that’s not enough African DNA for you, we also have Loci Associated with Skin Pigmentation Identified in African Populations, by Crawford et al:

Examining ethnically diverse African genomes, we identify variants in or near SLC24A5, MFSD12, DDB1, TMEM138, OCA2 and HERC2 that are significantly associated with skin pigmentation. Genetic evidence indicates that the light pigmentation variant at SLC24A5 was introduced into East Africa by gene flow from non-Africans. At all other loci, variants associated with dark pigmentation in Africans are identical by descent in southern Asian and Australo-Melanesian populations. Functional analyses indicate that MFSD12 encodes a lysosomal protein that affects melanogenesis in zebrafish and mice, and that mutations in melanocyte-specific regulatory regions near DDB1/TMEM138 correlate with expression of UV response genes under selection in Eurasians.

I’ve had an essay on the evolution of African skin tones sitting in my draft folder for ages because this research hadn’t been done. There’s plenty of research on European and Asian skin tones (skin appears to have significantly lightened around 10,000 years ago in Europeans,) but much less on Africans. Luckily for me, this paper fixes that.

Looks like SLC24A5 is related to that Levantine/Iranian back-migration into Africa documented in the first paper.

The Negritos of Sundaland, Sahul, and the Philippines

Ati (Negrito) woman from the Philippines

The Negritos are a fascinating group of short-statured, dark-skinned, frizzy-haired peoples from southeast Asia–chiefly the Andaman Islands, Malaysia, Philippines, and Thailand. (Spelling note: “Negritoes” is also an acceptable plural, and some sources use the Spanish Negrillos.)

Because of their appearance, they have long been associated with African peoples, especially the Pygmies. Pygmies are formally defined as any group whose adult men average 4’11” or less, and the term is almost always used specifically to refer to African Pygmies; the term pygmoid is sometimes used for groups whose men average 5’1″ or below, including the Negritos. (Some of the Bushmen tribes, Bolivians, Amazonians, the remote Taron, and a variety of others may also be pygmoid, by this definition.)

However, genetic testing has long indicated that they, along with other Melanesians and Australian Aborigines, are more closely related to other east Asian peoples than any African groups. In other words, they’re part of the greater Asian race, albeit a distant branch of it.

But how distant? And are the various Negrito groups closely related to each other, or do there just happen to be a variety of short groups of people in the area, perhaps due to convergent evolution triggered by insular dwarfism?


In Discerning the origins of the Negritos, First Sundaland Peoples: deep divergence and archaic admixture, Jinam et al gathered genetic data from Filipino, Malaysian, and Andamanese Negrito populations, and compared them both to each other and other Asian, African, and European groups. (Be sure to download the supplementary materials to get all of the graphs and maps.)

They found that the Negrito groups they studied “are basal to other East and Southeast Asians,” (basal: forming the bottom layer or base. In this case, it means they split off first,) “and that they diverged from West Eurasians at least 38,000 years ago.” (West Eurasians: Caucasians, consisting of Europeans, Middle Easterners, North Africans, and people from India.) “We also found relatively high traces of Denisovan admixture in the Philippine Negritos, but not in the Malaysian and Andamanese groups.” (Denisovans are a group of extinct humans similar to Neanderthals, but we’ve yet to find many of their bones. Just as Neanderthal DNA shows up in non-Sub-Saharan-Africans, so Denisovan DNA shows up in Melanesians.)

Figure 1 (A) shows PC analysis of Andamanese, Malaysian, and Philippine Negritos, revealing three distinct clusters:

In the upper right-hand corner, the Aeta, Agta, Batak, and Mamanwa are Philippine Negritos. The Manobo are non-Negrito Filipinos.

In the lower right-hand corner, the Jehai, Kintak, and Batek are Malaysian Negritos.

And in the upper left, we have the extremely isolated Andamanese Onge and Jarawa Negritos.

(Phil-NN and Mly-NN I believe are Filipino and Malaysian Non-Negritos.)

You can find the same chart, but flipped upside down, with Papuan and Melanesian DNA added, in the supplemental materials. Of the three Negrito clusters, the Papuans and Melanesians fall closest to the Philippine Negritos, along the same axis as the Malaysians.

By excluding the Andamanese (and Kintak) Negritos, Figure 1 (B) allows a closer look at the structure of the Philippine Negritos.

The Agta, Aeta, and Batak form a horizontal “comet-like pattern,” which likely indicates admixture with non-Negrito Philippine groups like the Manobo. The Mamanwa, who hail from a different part of the Philippines, also show this comet-like pattern, but along a different axis–likely because they intermixed with the different Filipinos who lived in their area. As you can see, there’s a fair amount of overlap–several of the Manobo individuals clustered with the Mamanwa Negritos, and the Batak cluster near several non-Negrito groups (see supplemental chart S4 B)–suggesting high amounts of mixing between these groups.
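If you have never seen how these PCA plots are produced, here is a minimal toy version (my own sketch, not the authors’ pipeline): simulate two populations whose allele frequencies have drifted apart, then project everyone onto the top principal components. Individuals of mixed ancestry land on a line between the two clusters, which is what produces the comet shapes.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_snps = 5_000

# Two toy populations whose allele frequencies have drifted slightly apart.
freq_a = rng.uniform(0.1, 0.9, n_snps)
freq_b = np.clip(freq_a + rng.normal(0, 0.05, n_snps), 0.01, 0.99)

pop_a = rng.binomial(2, freq_a, size=(100, n_snps)).astype(float)  # genotypes coded 0/1/2
pop_b = rng.binomial(2, freq_b, size=(100, n_snps)).astype(float)

genotypes = np.vstack([pop_a, pop_b])
genotypes -= genotypes.mean(axis=0)  # center each SNP

coords = PCA(n_components=2).fit_transform(genotypes)
# The first principal component cleanly separates the two populations.
print(coords[:3])   # population A individuals
print(coords[-3:])  # population B individuals
```

Real datasets have many populations and messier histories, but the principle is the same: thousands of small frequency differences add up to clearly separated clusters.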

ADMIXTURE analysis reveals a similar picture. The non-Negrito Filipino groups show up primarily as Orange. The Aeta, Agta, and Batak form a clear genetic cluster with each other and cline with the Orange Filipinos, with the Aeta the least admixed and Batak the most.

The white area on the chart isn’t a data error, but the unique signature of the geographically separated Mamanwa, who are highly mixed with the Manobo–and the Manobo, in turn, are mixed with them.

But this alone doesn’t tell us how ancient these populations are, nor whether they’re descended from one ancestral population. For this, the authors constructed several phylogenetic trees, based on all of the data at hand and assuming from 0 to 5 admixture events. The one on the left assumes 5 events, but for clarity only shows three of them. The Denisovan DNA is fascinating and well-documented elsewhere in Melanesian populations; that Malaysian and Philippine Negritos mixed with their neighbors is also known, supporting the choice of this tree as the most likely to be accurate.

Regardless of which you pick, all of the trees show very similar results, with the biggest difference being whether the Melanesians/Papuans split before or after the Andamanese/Malaysian Negritos.

In case you are unfamiliar with these trees, I’ll run down a quick explanation: This is a human family tree, with each split showing where one group of humans split off from the others and became an isolated group with its own unique genetic patterns. The orange and red lines mark places where formerly isolated groups met and interbred, producing children that are a mix of both. The first split in the tree, going back hundreds of thousands of years, is between all Homo sapiens (our species) and the Denisovans, a sister species related to the Neanderthals.

All humans outside of sub-Saharan Africans have some Neanderthal DNA because their ancestors met and interbred with Neanderthals on their way Out of Africa. Melanesians, Papuans, and some Negritos also have some Denisovan DNA, because their ancestors met and made children with members of this obscure human species, but Denisovan DNA is quite rare outside these groups.

Here is a map of Denisovan DNA levels the authors found, with 4% of Papuan DNA hailing from Denisovan ancestors, and the Aeta nearly as high. By contrast, the Andamanese Negritos appear to have zero Denisovan. Either the Andamanese split off before the ancestors of the Philippine Negritos and Papuans met the Denisovans, or all Denisovan DNA has been purged from their bloodlines, perhaps because it just wasn’t helpful for surviving on their islands.

Back to the tree: the second node is where the Biaka, a group of Pygmies from the Congo Rainforest in central Africa, branch off. Pygmy lineages are among the most ancient on earth, potentially going back over 200,000 years, well before any Homo sapiens had left Africa.

The next group that splits off from the rest of humanity are the Yoruba, a single ethnic group chosen to stand in for the entirety of the Bantus. Bantus are the group that you most likely think of when you think of black Africans, because over the past three millennia they have expanded greatly and conquered most of sub-Saharan Africa.

Next we have the Out of Africa event and the split between Caucasians (here represented by the French) and the greater Asian clade, which includes Australian Aborigines, Melanesians, Polynesians, Chinese, Japanese, Siberians, Inuit, and Native Americans.

The first groups to split off from the greater Asian clade (aka race) were the Andamanese and Malaysian Negritos, followed by the Papuans/Melanesians. Australian Aborigines are closely related to Papuans, as Australia and Papua New Guinea were connected in a single continent (called Sahul) back during the last Ice Age. Most of Indonesia and parts of the Philippines were also connected into a single landmass, called Sunda. Sensibly, people reached Sunda before Sahul. (Perhaps at that time the Andaman Islands, to the northwest of Sumatra, were also connected or at least closer to the mainland.)

Irrespective of the exact order in which Melanesians and individual Negrito groups split off, they all split well before all of the other Asian groups in the area.

This is supported by legends told by the Filipinos themselves:

Legends, such as those involving the Ten Bornean Datus and the Binirayan Festival, tell tales about how, at the beginning of the 12th century when Indonesia and Philippines were under the rule of Indianized native kingdoms, the ancestors of the Bisaya escaped from Borneo from the persecution of Rajah Makatunaw. Led by Datu Puti and Datu Sumakwel and sailing with boats called balangays, they landed near a river called Suaragan, on the southwest coast of Panay, (the place then known as Aninipay), and bartered the land from an Ati [Negrito] headman named Polpolan and his son Marikudo for the price of a necklace and one golden salakot. The hills were left to the Atis while the plains and rivers to the Malays. This meeting is commemorated through the Ati-atihan festival.[4]

The study’s authors estimate that the Negritos split from Europeans (Caucasians) around 30-38,000 years ago, and that the Malaysian and Philippine Negritos split around 13-15,000 years ago. (This all seems a bit tentative, IMO, especially since we have physical evidence of people in the area going back much further than that, and the authors themselves admit in the discussion that their time estimate may be too short.)

The authors also note:

Both our NJ (fig. 3A) and UPGMA (supplementary fig. S10) trees show that after divergence from Europeans, the ancestral Asians subsequently split into Papuans, Negritos and East Asians, implying a one-wave colonization of Asia. … This is in contrast to the study based on whole genome sequences that suggested Australian Aboriginal/Papuan first split from European/East Asians 60 kya, and later Europeans and East Asians diverged 40 kya (Malaspinas et al. 2016). This implies a two-wave migration into Asia…

The matter is still up for debate/more study.

Negrito couple from the Andaman Islands

In conclusion: All of the Negrito groups are likely descended from a common ancestor, (rather than having evolved from separate groups that happened to develop similar body types due to exposure to similar environments,) and were among the very first inhabitants of their regions. Despite their short stature, they are more closely related to other Asian groups (like the Chinese) than to African Pygmies. Significant mixing with their neighbors, however, is quickly obscuring their ancient lineages.

I wonder if all ancient human groups were originally short, and height a recently evolved trait in some groups?

In closing, I’d like to thank Jinam et al for their hard work in writing this article and making it available to the public, their sponsors, and the unique Negrito peoples themselves for surviving so long.

Evolution is slow–until it’s fast: Genetic Load and the Future of Humanity

Source: Priceonomics

A species may live in relative equilibrium with its environment, hardly changing from generation to generation, for millions of years. Turtles, for example, have barely changed since the Cretaceous, when dinosaurs still roamed the Earth.

But if the environment changes–critically, if selective pressures change–then the species will change, too. This was most famously demonstrated with English moths, which changed color from white-and-black speckled to pure black when pollution darkened the trunks of the trees they lived on. To survive, these moths need to avoid being eaten by birds, so any moth that stands out against the tree trunks tends to get turned into an avian snack. Against light-colored trees, dark-colored moths stood out and were eaten. Against dark-colored trees, light-colored moths stood out.

This change did not require millions of years. Dark-colored moths were virtually unknown in 1810, but by 1895, 98% of the moths were black.

The time it takes for evolution to occur depends simply on (A) the frequency of a trait in the population and (B) how strongly you are selecting for (or against) it.

Let’s break this down a little bit. Within a species, there exists a great deal of genetic variation. Some of this variation happens because two parents with different genes get together and produce offspring with a combination of their genes. Some of this variation happens because of random errors–mutations–that occur during copying of the genetic code. Much of the “natural variation” we see today started as some kind of error that proved to be useful, or at least not harmful. For example, all humans originally had dark skin similar to modern Africans’, but random mutations in some of the folks who no longer lived in Africa gave them lighter skin, eventually producing “white” and “Asian” skin tones.

(These random mutations also happen in Africa, but there they are harmful and so don’t stick around.)

Natural selection can only act on the traits that are actually present in the population. If we tried to select for “ability to shoot x-ray lasers from our eyes,” we wouldn’t get very far, because no one actually has that mutation. By contrast, albinism is rare, but it definitely exists, and if for some reason we wanted to select for it, we certainly could. (The incidence of albinism among the Hopi Indians is high enough–1 in 200 Hopis vs. 1 in 20,000 Europeans generally and 1 in 30,000 Southern Europeans–for scientists to discuss whether the Hopi have been actively selecting for albinism. This still isn’t a lot of albinism, but since the American Southwest is not a good environment for pale skin, it’s something.)

You will have a much easier time selecting for traits that crop up more frequently in your population than traits that crop up rarely (or never).

Second, we have intensity–and variety–of selective pressure. What % of your population is getting removed by natural selection each year? If 50% of your moths get eaten by birds because they’re too light, you’ll get a much faster change than if only 10% of moths get eaten.

Selection doesn’t have to involve getting eaten, though. Perhaps some of your moths are moth Lotharios, seducing all of the moth ladies with their fuzzy antennae. Over time, the moth population will develop fuzzier antennae as these handsome males out-reproduce their less hirsute cousins.

No matter what kind of selection you have, nor what part of your curve it’s working on, all that ultimately matters is how many offspring each individual has. If white moths have more children than black moths, then you end up with more white moths. If black moths have more babies, then you get more black moths.
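Here is a minimal sketch of that logic in code (mine, with made-up survival rates, not data from the actual moth studies): track the frequency of the dark variant when dark and light moths survive to reproduce at different rates.

```python
# Toy single-locus model of selection on moth color.
def simulate(p_dark=0.01, surv_dark=0.9, surv_light=0.5, generations=30):
    """Return the frequency of dark moths each generation when dark and
    light moths survive to breed at different (made-up) rates."""
    freqs = [p_dark]
    for _ in range(generations):
        dark = p_dark * surv_dark           # dark moths that survive to breed
        light = (1 - p_dark) * surv_light   # light moths that survive to breed
        p_dark = dark / (dark + light)      # next generation's frequency
        freqs.append(p_dark)
    return freqs

# With half the light moths eaten each generation, a 1% dark variant
# reaches near-fixation in roughly 15 generations:
print([round(f, 2) for f in simulate()])
```

Make the two survival rates nearly equal and the same change takes hundreds of generations, which is the whole point about intensity of selection.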


So what happens when you completely remove selective pressures from a population?

Back in 1968, ethologist John B. Calhoun set up an experiment popularly called “Mouse Utopia.” Four pairs of mice were given a large, comfortable habitat with no predators and plenty of food and water.

Predictably, the mouse population increased rapidly–once the mice were established in their new homes, their population doubled every 55 days. But after 211 days of explosive growth, reproduction began–mysteriously–to slow. For the next 245 days, the mouse population doubled only once every 145 days.
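As a quick back-of-the-envelope check (my arithmetic, assuming growth started from the eight founding mice): a population that doubles every T_d days grows as

```latex
% Exponential growth with doubling time T_d (illustrative only):
N(t) = N_0 \cdot 2^{\,t/T_d},
\qquad
N(211\ \text{days}) \approx 8 \cdot 2^{211/55} \approx 8 \cdot 14 \approx 115\ \text{mice}
```

Once the doubling time stretched to 145 days, the same fourteen-fold increase would have taken about 2.6 times as long (145/55).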

The birth rate continued to decline. As births and death reached parity, the mouse population stopped growing. Finally the last breeding female died, and the whole colony went extinct.


As I’ve mentioned before, Israel is (AFAIK) the only developed country in the world with a TFR above replacement.

It has long been known that overcrowding leads to population stress and reduced reproduction, but overcrowding can only explain why the mouse population began to shrink–not why it died out. Surely by the time there were only a few breeding pairs left, things had become comfortable enough for the remaining mice to resume reproducing. Why did the population not stabilize at some comfortable level?

Professor Bruce Charlton suggests an alternative explanation: the removal of selective pressures on the mouse population resulted in increasing mutational load, until the entire population became too mutated to reproduce.

What is genetic load?

As I mentioned before, every time a cell replicates, a certain number of errors–mutations–occur. Occasionally these mutations are useful, but the vast majority of them are not. About 30-50% of pregnancies end in miscarriage (the percent of miscarriages people recognize is lower because embryos often miscarry before causing any overt signs of pregnancy,) and the majority of those miscarriages are caused by genetic errors.

Unfortunately, randomly changing part of your genetic code is more likely to give you no skin than skintanium armor.

But it’s only the worst genetic problems that never see the light of day. Plenty of mutations merely reduce fitness without actually killing you. Down Syndrome, famously, is caused by an extra copy of chromosome 21.

While a few traits–such as sex or eye color–can be simply modeled as influenced by only one or two genes, many traits–such as height or IQ–appear to be influenced by hundreds or thousands of genes:

Differences in human height is 60–80% heritable, according to several twin studies[19] and has been considered polygenic since the Mendelian-biometrician debate a hundred years ago. A genome-wide association (GWA) study of more than 180,000 individuals has identified hundreds of genetic variants in at least 180 loci associated with adult human height.[20] The number of individuals has since been expanded to 253,288 individuals and the number of genetic variants identified is 697 in 423 genetic loci.[21]

Obviously each of these genes plays only a small role in determining overall height (and this is of course holding environmental factors constant.) There are a few extreme conditions–gigantism and dwarfism–that are caused by single mutations, but the vast majority of height variation is caused by which particular mix of those 700 or so variants you happen to have.
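As a concrete picture of what “hundreds of variants, each with a tiny effect” means, here is a toy additive model (my own; the effect sizes and frequencies are invented, and only the ~700-variant count comes from the quote above):

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_loci = 10_000, 700           # ~700 height-associated variants, per the quote
freqs = rng.uniform(0.05, 0.95, n_loci)  # allele frequencies (invented)
effects = rng.normal(0, 0.15, n_loci)    # each allele copy nudges height a little (invented scale)

genotypes = rng.binomial(2, freqs, size=(n_people, n_loci))     # 0, 1, or 2 copies per locus
genetic_component = genotypes @ effects                         # additive polygenic contribution
height = 170 + genetic_component + rng.normal(0, 4, n_people)   # plus environment/noise, in cm

# No single locus matters much, but together they produce a smooth,
# roughly normal spread of heights.
print(round(height.mean(), 1), round(height.std(), 1))
```

Swap “height” for “IQ” and the same picture applies, which is why the next passage reads so similarly.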

The situation with IQ is similar:

Intelligence in the normal range is a polygenic trait, meaning it’s influenced by more than one gene.[3][4]

The general figure for the heritability of IQ, according to an authoritative American Psychological Association report, is 0.45 for children, and rises to around 0.75 for late teens and adults.[5][6] In simpler terms, IQ goes from being weakly correlated with genetics, for children, to being strongly correlated with genetics for late teens and adults. … Recent studies suggest that family and parenting characteristics are not significant contributors to variation in IQ scores;[8] however, poor prenatal environment, malnutrition and disease can have deleterious effects.[9][10]

And from a recent article published in Nature Genetics, Genome-wide association meta-analysis of 78,308 individuals identifies new loci and genes influencing human intelligence:

Despite intelligence having substantial heritability [2] (0.54) and a confirmed polygenic nature, initial genetic studies were mostly underpowered [3, 4, 5]. Here we report a meta-analysis for intelligence of 78,308 individuals. We identify 336 associated SNPs (METAL P < 5 × 10−8) in 18 genomic loci, of which 15 are new. Around half of the SNPs are located inside a gene, implicating 22 genes, of which 11 are new findings. Gene-based analyses identified an additional 30 genes (MAGMA P < 2.73 × 10−6), of which all but one had not been implicated previously. We show that the identified genes are predominantly expressed in brain tissue, and pathway analysis indicates the involvement of genes regulating cell development (MAGMA competitive P = 3.5 × 10−6). Despite the well-known difference in twin-based heritability [2] for intelligence in childhood (0.45) and adulthood (0.80), we show substantial genetic correlation (rg = 0.89, LD score regression P = 5.4 × 10−29). These findings provide new insight into the genetic architecture of intelligence.

The greater the number of genes influencing a trait, the harder they are to identify without extremely large studies, because any small group of people might not even have the same set of relevant genes.
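(For reference, the “P < 5 × 10−8” cutoff quoted above is not magic; it is just the usual 0.05 significance level Bonferroni-corrected for roughly one million independent common variants:

```latex
% Origin of the conventional genome-wide significance threshold:
\frac{0.05}{1{,}000{,}000} = 5 \times 10^{-8}
```

so a hit has to clear a bar a million times stricter than a single ordinary test.)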

High IQ correlates positively with a number of life outcomes, like health and longevity, while low IQ correlates with negative outcomes like disease, mental illness, and early death. Obviously this is in part because dumb people are more likely to make dumb choices which lead to death or disease, but IQ also correlates with choice-free matters like height and your ability to quickly press a button. Our brains are not some mysterious entities floating in a void, but physical parts of our bodies, and anything that affects our overall health and physical functioning is likely to also have an effect on our brains.

Like height, most of the genetic variation in IQ is the combined result of many genes. We’ve definitely found some mutations that result in abnormally low IQ, but so far we have yet (AFAIK) to find any single genes that produce the IQ equivalent of gigantism. In other words, low (genetic) IQ is caused by genetic load–Small Yet Important Genetic Differences Between Highly Intelligent People and General Population:

The study focused, for the first time, on rare, functional SNPs – rare because previous research had only considered common SNPs and functional because these are SNPs that are likely to cause differences in the creation of proteins.

The researchers did not find any individual protein-altering SNPs that met strict criteria for differences between the high-intelligence group and the control group. However, for SNPs that showed some difference between the groups, the rare allele was less frequently observed in the high intelligence group. This observation is consistent with research indicating that rare functional alleles are more often detrimental than beneficial to intelligence.

Maternal mortality rates over time, UK data

Greg Cochran has some interesting Thoughts on Genetic Load. (Currently, the most interesting candidate genes for potentially increasing IQ also have terrible side effects, like autism, Tay Sachs and Torsion Dystonia. The idea is that–perhaps–if you have only a few genes related to the condition, you get an IQ boost, but if you have too many, you get screwed.) Of course, even conventional high-IQ has a cost: increased maternal mortality (larger heads).

Wikipedia defines genetic load as:

the difference between the fitness of an average genotype in a population and the fitness of some reference genotype, which may be either the best present in a population, or may be the theoretically optimal genotype. … Deleterious mutation load is the main contributing factor to genetic load overall.[5] Most mutations are deleterious, and occur at a high rate.

There’s math, if you want it.
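A minimal version of that math, summarizing standard textbook population genetics rather than anything specific to the article: load is the proportional shortfall of mean fitness from the best (or optimal) genotype, and at mutation-selection balance a deleterious allele settles at the frequency where new mutations exactly offset the copies selection removes.

```latex
% Genetic load: shortfall of mean fitness \bar{w} from the reference fitness w_{\max}:
L = \frac{w_{\max} - \bar{w}}{w_{\max}}

% Mutation-selection balance for a deleterious allele with mutation rate \mu,
% selection coefficient s, and dominance h (textbook approximations):
\hat{q} \approx \frac{\mu}{hs} \ (\text{partially dominant}),
\qquad
\hat{q} \approx \sqrt{\mu / s} \ (\text{fully recessive})

% Haldane's result: at equilibrium the load per locus depends on \mu, not on s:
L \approx 2\mu \ (\text{dominant}), \qquad L \approx \mu \ (\text{recessive})
```

The counterintuitive part is that last line: how harmful a mutation is barely matters to the equilibrium load, because milder mutations simply hang around longer before selection catches up with them.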

Normally, genetic mutations are removed from the population at a rate determined by how bad they are. Really bad mutations kill you instantly, and so are never born. Slightly less bad mutations might survive, but never reproduce. Mutations that are only a little bit deleterious might have no obvious effect, but result in having slightly fewer children than your neighbors. Over many generations, such mutations will eventually disappear.

(Some mutations are more complicated–sickle cell, for example, is protective against malaria if you have only one copy of the mutation, but gives you sickle cell anemia if you have two.)

Jakubany is a town in the Carpathian Mountains

Throughout history, infant mortality was our single biggest killer. For example, here is some data from Jakubany, a town in the Carpathian Mountains:

We can see that, prior to the 1900s, the town’s infant mortality rate stayed consistently above 20%, and often peaked near 80%.

The graph’s creator states:

When I first ran a calculation of the infant mortality rate, I could not believe certain of the intermediate results. I recompiled all of the data and recalculated … with the same astounding result – 50.4% of the children born in Jakubany between the years 1772 and 1890 would die before reaching ten years of age! …one out of every two! Further, over the same 118 year period, of the 13306 children who were born, 2958 died (~22 %) before reaching the age of one.

Historical infant mortality rates can be difficult to calculate in part because they were so high, people didn’t always bother to record infant deaths. And since infants are small and their bones delicate, their burials are not as easy to find as adults’. Nevertheless, Wikipedia estimates that Paleolithic man had an average life expectancy of 33 years:

Based on the data from recent hunter-gatherer populations, it is estimated that at 15, life expectancy was an additional 39 years (total 54), with a 0.60 probability of reaching 15.[12]

Priceonomics: Why life expectancy is misleading

In other words, a 40% chance of dying in childhood. (Not exactly the same as infant mortality, but close.)
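Those Wikipedia figures are internally consistent, by the way; the arithmetic (mine) works out if the children who died before 15 died, on average, at about a year and a half old:

```latex
% Life expectancy at birth as a weighted average of survivors and child deaths:
E[\text{age at death}] = 0.60 \times 54 + 0.40 \times \bar{a}_{\text{child}} = 33
\quad\Rightarrow\quad
\bar{a}_{\text{child}} = \frac{33 - 0.60 \times 54}{0.40} = 1.5\ \text{years}
```

In other words, “life expectancy of 33” is mostly a statement about dead infants, not about adults keeling over at 33.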

Wikipedia gives similarly dismal stats for life expectancy in the Neolithic (20-33), Bronze and Iron ages (26), Classical Greece(28 or 25), Classical Rome (20-30), Pre-Columbian Southwest US (25-30), Medieval Islamic Caliphate (35), Late Medieval English Peerage (30), early modern England (33-40), and the whole world in 1900 (31).

Over at ThoughtCo: Surviving Infancy in the Middle Ages, the author reports estimated infant mortality rates of between 30 and 50%. I recall a study on Anasazi nutrition which I sadly can’t locate right now, which found 100% malnutrition rates among adults (based on enamel hypoplasias,) and 50% infant mortality.

As Priceonomics notes, the main driver of increasing global life expectancy–48 years in 1950 and 71.5 years in 2014 (according to Wikipedia)–has been a massive decrease in infant mortality. The average life expectancy of an American newborn back in 1900 was only 47 and a half years, whereas a 60 year old could expect to live to be 75. In 1998, the average infant could expect to live to about 75, and the average 60 year old could expect to live to about 80.

Back in his post on Mousetopia, Charlton writes:

Michael A Woodley suggests that what was going on [in the Mouse experiment] was much more likely to be mutation accumulation; with deleterious (but non-fatal) genes incrementally accumulating with each generation and generating a wide range of increasingly maladaptive behavioural pathologies; this process rapidly overwhelming and destroying the population before any beneficial mutations could emerge to ‘save’ the colony from extinction. …

The reason why mouse utopia might produce so rapid and extreme a mutation accumulation is that wild mice naturally suffer very high mortality rates from predation. …

Thus mutation selection balance is in operation among wild mice, with very high mortality rates continually weeding-out the high rate of spontaneously-occurring new mutations (especially among males) – with typically only a small and relatively mutation-free proportion of the (large numbers of) offspring surviving to reproduce; and a minority of the most active and healthy (mutation free) males siring the bulk of each generation.

However, in Mouse Utopia, there is no predation and all the other causes of mortality (eg. Starvation, violence from other mice) are reduced to a minimum – so the frequent mutations just accumulate, generation upon generation – randomly producing all sorts of pathological (maladaptive) behaviours.

Historically speaking, another selective factor operated on humans: while about 67% of women reproduced, only 33% of men did. By contrast, according to Psychology Today, a majority of today’s men have or will have children.

Today, almost everyone in the developed world has plenty of food, a comfortable home, and doesn’t have to worry about dying of bubonic plague. We live in humantopia, where the biggest factor influencing how many kids you have is how many you want to have.


Back in 1930, infant mortality rates were highest among the children of unskilled manual laborers, and lowest among the children of professionals (IIRC, this is British data.) Today, infant mortality is almost non-existent, but voluntary childlessness has now inverted this phenomenon:

Yes, the percent of childless women appears to have declined since 1994, but the overall pattern of who is having children still holds. Further, while only 8% of women with post graduate degrees have 4 or more children, 26% of those who never graduated from high school have 4+ kids. Meanwhile, the age of first-time moms has continued to climb.

In other words, the strongest remover of genetic load–infant mortality–has all but disappeared; populations with higher load (lower IQ) are having more children than populations with lower load; and everyone is having children later, which also increases genetic load.

Take a moment to consider the high-infant mortality situation: an average couple has a dozen children. Four of them, by random good luck, inherit a good combination of the couple’s genes and turn out healthy and smart. Four, by random bad luck, get a less lucky combination of genes and turn out not particularly healthy or smart. And four, by very bad luck, get some unpleasant mutations that render them quite unhealthy and rather dull.

Infant mortality claims half their children, taking the least healthy. They are left with 4 bright children and 2 moderately intelligent children. The four bright children succeed at life, marry well, and end up with several healthy, surviving children of their own, while the moderately intelligent do okay and end up with a couple of children each.

On average, society’s overall health and IQ should hold steady or even increase over time, depending on how strong the selective pressures actually are.

Or consider a consanguineous couple with a high risk of genetic birth defects: perhaps a full 80% of their children die, but 20% turn out healthy and survive.

Today, by contrast, your average couple has two children. One of them is lucky, healthy, and smart. The other is unlucky, unhealthy, and dumb. Both survive. The lucky kid goes to college, majors in underwater intersectionist basket-weaving, and has one kid at age 40. That kid has Down Syndrome and never reproduces. The unlucky kid can’t keep a job, has chronic health problems, and 3 children by three different partners.
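Here is a deliberately crude toy model of the difference between those two worlds (entirely my own, with made-up numbers): every child inherits a parent’s mutation count plus a couple of new mutations, and each generation half the children die, either selectively (the most-loaded half, as under harsh infant mortality) or at random (standing in for a world where survival no longer depends on load).

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_load(generations=50, pop=1_000, new_mutations=2.0,
              mortality=0.5, selective=True):
    """Toy model: children inherit a parent's mutation count plus new ones.
    Half of every cohort dies, either the most-loaded half ('selective')
    or a random half. Returns the mean mutation count of the final generation."""
    load = np.zeros(pop)
    for _ in range(generations):
        children = rng.choice(load, size=2 * pop) + rng.poisson(new_mutations, 2 * pop)
        survivors = int(len(children) * (1 - mortality))
        if selective:
            children = np.sort(children)[:survivors]          # load-dependent deaths
        else:
            children = rng.permutation(children)[:survivors]  # load-blind deaths
        load = rng.choice(children, size=pop)
    return load.mean()

print("selective mortality:", round(mean_load(selective=True), 1))
print("random mortality:   ", round(mean_load(selective=False), 1))
```

With these made-up numbers the load still creeps upward even under selective mortality, just much more slowly; the point is only that who dies matters at least as much as how many die.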

Your consanguineous couple migrates from war-torn Somalia to Minnesota. They still have 12 kids, but three of them are autistic with IQs below the official retardation threshold. “We never had this back in Somalia,” they cry. “We don’t even have a word for it.”

People normally think of dysgenics as merely “the dumb outbreed the smart,” but genetic load applies to everyone–men and women, smart and dull, black and white, young and especially old–because we all make random transcription errors when copying our DNA.

I could offer a list of signs of increasing genetic load, but there’s no way to avoid cherry-picking trends I already know are happening, like falling sperm counts or rising (diagnosed) autism rates, so I’ll skip that. You may substitute your own list of “obvious signs society is falling apart at the genes” if you so desire.

Nevertheless, the transition from 30% (or greater) infant mortality to almost 0% is amazing, both on a technical level and because it heralds an unprecedented era in human evolution. The selective pressures on today’s people are massively different from those our ancestors faced, simply because our ancestors’ biggest filter was infant mortality. Unless infant mortality acted completely at random–taking the genetically loaded and unloaded alike–or on factors completely irrelevant to load, the elimination of infant mortality must continuously increase the genetic load in the human population. Over time, if that load is not selected out–say, through more people being too unhealthy to reproduce–then we will end up with an increasing population of physically sick, maladjusted, mentally ill, and low-IQ people.

(Remember, all mental traits are heritable–so genetic load influences everything, not just controversial ones like IQ.)

If all of the above is correct, then I see only 4 ways out:

  1. Do nothing: Genetic load increases until the population is non-functional and collapses, resulting in a return of Malthusian conditions, invasion by stronger neighbors, or extinction.
  2. Sterilization or other weeding out of high-load people, coupled with higher fertility by low-load people
  3. Abortion of high load fetuses
  4. Genetic engineering

#1 sounds unpleasant, and #2 would result in masses of unhappy people. We don’t have the technology for #4, yet. I don’t think the technology is quite there for #2, either, but it’s much closer–we can certainly test for many of the deleterious mutations that we do know of.

The People Who Went Down the Rivers: Origin of the Sino-Tibetan Language Family

I recently received a question from Quas Lacrimas:

“What (if anything) do you make of the fact that Proto-Tibetan and Proto-Sinitic were sister languages, but Tibetans and Han are so genetically disparate?”

My first response was that, assuming the question itself was correct, then one group must have conquered the other group, imparting its language but not its DNA.

On further reflection, though, I decided it’d be best to check whether the question’s initial premises were correct.

Sino-Tibetan, it turns out, is a legit language family:

The Sino-Tibetan languages, in a few sources also known as Tibeto-Burman or Trans-Himalayan, are a family of more than 400 languages spoken in East Asia, Southeast Asia and South Asia. The family is second only to the Indo-European languages in terms of the number of native speakers. The Sino-Tibetan languages with the most native speakers are the varieties of Chinese (1.3 billion speakers), Burmese (33 million) and the Tibetic languages (8 million). Many Sino-Tibetan languages are spoken by small communities in remote mountain areas and as such are poorly documented.

Map of the Sino-Tibetan language family
Red: Chinese; Yellow: Tibetan; Brown: Karen; Green: Lolo-Burmese; Orange: Other

But the claim that Tibetans and Chinese people are genetically disparate looks more questionable. While the Wikipedia page on Sino-Tibetan claims that, “There is no ethnic unity among the many peoples who speak Sino-Tibetan languages,” in the next two sentences it also claims that, “The most numerous are the Han Chinese, numbering 1.4+ billion (in China alone). The Hui (10 million) also speak Chinese but are officially classified as ethnically distinct by the Chinese government.”

But the Chinese government claiming that a group is an official ethnic group doesn’t make it a genetic group. “Hui” just means Muslim, and Muslims of any genetic background can get lumped into the group. I actually read some articles about the Hui ages ago, and as far as I recall, the category didn’t really exist in any official way prior to the modern PRC declaring that it did for census purposes. Today (or recently) there are some special perks for being an ethnic minority in China, like exceptions to the one-child policy, which lead more people to embrace their “Hui” identity and start thinking about themselves in this pan-Chinese-Muslim way rather than in terms of their local ethnic group, but none of this is genetics.

So right away I am suspicious that this claim is more “these groups see themselves as different” than “they are genetically different.” And I totally agree that Tibetan people and Chinese people are culturally distinct and probably see themselves as different groups.

For genetics, let’s turn back to Haak et al’s representation of global genetics:

Haak et al.’s full dataset

Just in case you’re new around here, the part dominated by bright blue is sub-Saharan Africans, the yellow is Asians, and the orange is Caucasians. I’ve made a map to make it easier to visualize the distribution of these groups:

Asian, Australian, and Melanesian ethnic groups (including Indian, Middle Eastern, and Chinese) from Haak et al.’s dataset

This dataset doesn’t have a Tibetan group, but it does have the Nepalese Kusunda, the Tu (a Mongolic-language-speaking people in China), and the Burmese Lahu. So it’s a start.

The first thing that jumps out at me is that the groups in the Sino-Tibetan language family do not look all that genetically distinct, at least not on a global scale. They’re more similar than Middle Easterners and Europeans, despite the fact that Anatolian farmers invaded Europe several thousand years ago.

The Wikipedia page on Sino-Tibetan notes:

J. A. Matisoff proposed that the urheimat of the Sino-Tibetan languages was around the upper reaches of the Yangtze, Brahmaputra, Salween, and Mekong. This view is in accordance with the hypothesis that bubonic plague, cholera, and other diseases made the easternmost foothills of the Himalayas between China and India difficult for people outside to migrate in but relatively easily for the indigenous people, who had been adapted to the environment, to migrate out.[68]

The Yangtze, Brahmaputra, Salween and Mekong rivers, as you might have already realized if you took a good look at the map at the beginning of the post, all begin in Tibet.

Since Tibet was recently conquered by China, I was initially thinking that perhaps an ancient Chinese group had imposed their language on the Tibetans some time in the remote past, but Tibetans heading downstream and possibly conquering the people below makes a lot more sense.

oh look, it’s our friends the Ainu

According to About World Languages, Proto-Sino-Tibetan may have split into its Sinitic and Tibeto-Burman branches about 4,000 BC. This is about the same time Proto-Indo-European started splitting up, so we have some idea of what a language family looks like when it’s that old; much older, and the languages start becoming so distinct that reconstruction becomes more difficult.

But if we look at the available genetic data a little more closely, we see that there are some major differences between Tibetans and their Sinitic neighbors–most notably, many Tibetan men belong to Y-Chromosome haplogroup D, while most Han Chinese men belong to haplogroup O with a smattering of Haplogroup C, which may have arrived via the Mongols.

According to Wikipedia:

The distribution of Haplogroup D-M174 is found among nearly all the populations of Central Asia and Northeast Asia south of the Russian border, although generally at a low frequency of 2% or less. A dramatic spike in the frequency of D-M174 occurs as one approaches the Tibetan Plateau. D-M174 is also found at high frequencies among Japanese people, but it fades into low frequencies in Korea and China proper between Japan and Tibet.


It is found today at high frequency among populations in Tibet, the Japanese archipelago, and the Andaman Islands, though curiously not in India. The Ainu of Japan are notable for possessing almost exclusively Haplogroup D-M174 chromosomes, although Haplogroup C-M217 chromosomes also have been found in 15% (3/20) of sampled Ainu males. Haplogroup D-M174 chromosomes are also found at low to moderate frequencies among populations of Central Asia and northern East Asia as well as the Han and Miao–Yao peoples of China and among several minority populations of Sichuan and Yunnan that speak Tibeto-Burman languages and reside in close proximity to the Tibetans.[5]

Unlike haplogroup C-M217, Haplogroup D-M174 is not found in the New World…

Haplogroup D-M174 is also remarkable for its rather extreme geographic differentiation, with a distinct subset of Haplogroup D-M174 chromosomes being found exclusively in each of the populations that contains a large percentage of individuals whose Y-chromosomes belong to Haplogroup D-M174: Haplogroup D-M15 among the Tibetans (as well as among the mainland East Asian populations that display very low frequencies of Haplogroup D-M174 Y-chromosomes), Haplogroup D-M55 among the various populations of the Japanese Archipelago, Haplogroup D-P99 among the inhabitants of Tibet, Tajikistan and other parts of mountainous southern Central Asia, and paragroup D-M174 without tested positive subclades (probably another monophyletic branch of Haplogroup D) among the Andaman Islanders. Another type (or types) of paragroup D-M174 without tested positive subclades is found at a very low frequency among the Turkic and Mongolic populations of Central Asia, amounting to no more than 1% in total. This apparently ancient diversification of Haplogroup D-M174 suggests that it may perhaps be better characterized as a “super-haplogroup” or “macro-haplogroup.” In one study, the frequency of Haplogroup D-M174 without tested positive subclades found among Thais was 10%.

Haplogroup D’s sister clade, Haplogroup E, (both D and E are descended from Haplogroup DE), is found almost exclusively in Africa.

Haplogroup D is therefore very ancient, estimated at 50-60,000 years old. Haplogroup O, by contrast, is only about 30,000 years old.

On the subject of Han genetics, Wikipedia states:

Y-chromosome haplogroup O3 is a common DNA marker in Han Chinese, as it appeared in China in prehistoric times. It is found in more than 50% of Chinese males, and ranging up to over 80% in certain regional subgroups of the Han ethnicity.[100] However, the mitochondrial DNA (mtDNA) of Han Chinese increases in diversity as one looks from northern to southern China, which suggests that male migrants from northern China married with women from local peoples after arriving in modern-day Guangdong, Fujian, and other regions of southern China.[101][102] … Another study puts Han Chinese into two groups: northern and southern Han Chinese, and it finds that the genetic characteristics of present-day northern Han Chinese was already formed as early as three-thousand years ago in the Central Plain area.[109]

(Note that 3,000 years ago is potentially a thousand years after the first expansion of Proto-Sino-Tibetan.)

The estimated contribution of northern Hans to southern Hans is substantial in both paternal and maternal lineages and a geographic cline exists for mtDNA. As a result, the northern Hans are the primary contributors to the gene pool of the southern Hans. However, it is noteworthy that the expansion process was dominated by males, as is shown by a greater contribution to the Y-chromosome than the mtDNA from northern Hans to southern Hans. These genetic observations are in line with historical records of continuous and large migratory waves of northern China inhabitants escaping warfare and famine, to southern China.

Interestingly, the page on Tibetans notes, “It is thought that most of the Tibeto-Burman-speakers in Southwest China, including the Tibetans, are direct descendants from the ancient Qiang.”[6]

On the Qiang:

The term “Qiang” appears in the Classic of Poetry in reference to Tang of Shang (trad. 1675–1646 BC).[14] They seem to have lived in a diagonal band from northern Shaanxi to northern Henan, somewhat to the south of the later Beidi. They were enemy of the Shang dynasty, who mounted expeditions against them, capturing slaves and victims for human sacrifice. The Qiang prisoners were skilled in making oracle bones.[15]

This ancient tribe is said to be the progenitor of both the modern Qiang and the Tibetan people.[16] There are still many ethnological and linguistic links between the Qiang and the Tibetans.[16] The Qiang tribe expanded eastward and joined the Han people in the course of historical development, while the other branch that traveled southwards, crosses over the Hengduan Mountains, and entered the Yungui Plateau; some went even farther, to Burma, forming numerous ethnic groups of the Tibetan-Burmese language family.[17] Even today, from linguistic similarities, their relative relationship can be seen.

So here’s what I think happened (keeping in mind that I am in no way an expert on these subjects):

  1. About 8,000 years ago: neolithic people lived in Asia. (People of some sort have been living in Asia since Homo erectus, after all.) The ancestors of today’s Sino-Tibetans lived atop the Tibetan plateau.
  2. About 6,000 years ago: the Tibetans headed downstream, following the course of local rivers. In the process, they probably conquered and absorbed many of the local tribes they encountered.
  3. About 4,000 years ago: the Han and Qiang are ethnically and linguistically distinct, though the Qiang are still fairly similar to the Tibetans.
  4. The rest of Chinese history: Invasion from the north. Not only did the Mongols invade and kill somewhere between 20 and 60 million Chinese people in the 13th century, but there were also multiple invasions/migrations by people who were trying to get away from the Mongols.

Note that while the original proto-Sino-Tibetan invasion likely spread Tibetan Y-Chromosomes throughout southern China, the later Mongol and other Chinese invasions likely wiped out a large percent of those same chromosomes, as invaders both tend to be men and to kill men; women are more likely to survive invasions.

Most recently, of course, the People’s Republic of China conquered Tibet in 1951.

I’m sure there’s a lot I’m missing that would be obvious to an expert.