Book Club: The Code Economy pt 1

I don’t think the publishers got their money’s worth on cover design

Welcome to EvX’s Book Club. Today we begin our exciting tour of Philip E. Auerswald’s The Code Economy: A Forty-Thousand-Year History with the introduction, “Technology = Recipes,” and chapter one, “Jobs: Divide and Coordinate,” if we get that far.

I’m not sure exactly how to run a book club, so just grab some coffee and let’s dive right in.

First, let’s note that Auerswald doesn’t mean code in the narrow sense of “commands fed into a computer” but in a much broader sense of all encoded processes humans have come up with. His go-to example is the cooking recipe.

The Code Economy describes the evolution of human productive activity from simplicity to complexity over the span of more than 40,000 years. I call this evolutionary process the advance of code.

I find the cooking example a bit cutesy, but otherwise it gets the job done.

How… have we humans managed to get where we are today despite our abundant failings, including wars, famine, and a demonstrably meager capacity for society-wide planning and coordination? … by developing productive activities that evolve into regular routines and standardized platforms–which is to say that we have survived, and thrived, by creating and advancing code.

There’s so much in this book that almost every sentence bears discussion. First, as I’ve noted before, social organization appears to be a spontaneous, emergent feature of every human group. Without even really meaning to, humans just naturally seem compelled to organize themselves. One day you’re hanging out with your friends, riding motorcycles, living like an outlaw, and the next thing you know you’re using the formal legal system to sue a toy store for infringement of your intellectual property.

Alexander Wienberger, Holodomor

At the same time, our ability to organize society at the national level is completely lacking. As one of my professors once put it, “God must hate communists, because every time a country goes communist, an ‘act of God’ occurs and everyone dies.”

It’s a mystery why God hates communists so much, but hate ’em He does. Massive-scale social engineering has been a total failure, and we’ll be suffering the results for a long time.

This creates a kind of conflict: people look at the small-scale organizing they do, then at the large-scale disorganization around them, and struggle to understand why the small stuff can’t simply be scaled up.

And yet… society still kind of works. I can go to the grocery store and be reasonably certain that by some magical process, fresh produce has made its way from fields in California to the shelf in front of me. By some magical process, I can wave a piece of plastic around and use it to exchange enough other, unseen goods to pay for my groceries. I can climb into a car I didn’t build and cruise down a network of streets and intersections, reasonably confident that everyone else driving their own two-ton behemoth at 60 miles an hour a few feet away from me has internalized the same rules necessary for not crashing into me. Most of the time. And I can go to the gas station and pour a miracle liquid into my car and the whole system works, whether or not I have any clue how all of the parts manage to come together and do so.

The result is a miracle. Modern society is a miracle. If you don’t believe me, try using an outhouse for a few months. Try carrying all of your drinking water by hand from the local stream and chopping down all of the wood you need to boil it to make it potable. Try fighting off parasites, smallpox, or malaria without medicine or vaccinations. For all my complaints (and I know I complain a lot,) I love civilization. I love not worrying about cholera, crop failure, or dying from cavities. I love air conditioning, refrigerators, and flush toilets. I love books and the internet and domesticated strawberries. All of these are things I didn’t create and can’t take credit for, but get to enjoy nonetheless. I have been blessed.

But at the same time, “civilization” isn’t equally distributed. Millions (billions?) of the world’s people don’t have toilets, electricity, refrigerators, or even a decent road from their village to the next.

GDP per capita by country

Auerswald is a passionate champion of code. His answer to unemployment problems is probably “learn to code,” but in such a broad, metaphorical sense, encompassing so many human activities, that we can probably forgive him for it. One thing he doesn’t examine is why code takes off in some places but not others. Why is civilization more complex in Hong Kong than in Somalia? Why does France boast more Fields Medalists than the DRC?

In our next book (Niall Ferguson’s The Great Degeneration,) we’ll discuss whether specific structures like legal and tax codes can affect how well societies grow and thrive (spoiler alert: they do, just see communism,) and of course you are already familiar with Jared Diamond’s environmentalist theory that folks in some parts of the world just had better natural resources to work with than folks in other parts (also true, at least in some cases. I’m not expecting some great industry to get up and running on its own in the Arctic.)

IQ by country

But laying these concerns aside, there are obviously other broad factors at work. A map of GDP per capita looks an awful lot like a map of average IQs, with obvious caveats about the accidentally oil-rich Saudis and economically depressed ex-communists.

Auerswald believes that the past 40,000 years of code have not been disasters for the human race, but rather a cascade of successes, as each new invention and expansion of our repertoire of “recipes” or “codes” has enabled a whole host of new developments. For example, the development of copper tools didn’t just put flint knappers out of business; it also opened up whole new industries, because you can make more varieties of tools out of copper than flint. Now we had copper miners, copper smelters (a new profession), and copper workers. Copper tools could be sharpened and, unlike stone, resharpened, making them more durable. Artists made jewelry; spools of copper wire became trade goods, traveling long distances and stimulating the prehistoric “economy.” New code bequeaths complexity and even more code, not mass flint-knapper unemployment.

Likewise, the increase in reliable food supply created by farming didn’t create mass hunter-gatherer unemployment, but stimulated the growth of cities and differentiation of humans into even more professions, like weavers, cobblers, haberdashers, writers, wheelwrights, and mathematicians.

It’s a hopeful view, and I appreciate it in these anxious times.

But it’s very easy to say that the advent of copper or bronze or agriculture was a success because we are descended from the people who succeeded. We’re not descended from the hunter-gatherers who got displaced or wiped out by agriculturalists. In recent cases where hunter-gatherer or herding societies were brought into the agriculturalist fold, the process has been rather painful.

Elizabeth Marshall Thomas’s The Harmless People, about the Bushmen of the Kalahari, might overplay the romance and downplay the violence, but the epilogue’s description of how the arrival of “civilization” resulted in the deaths and degradation of the Bushmen brought tears to my eyes. First they died of dehydration because new fences erected to protect “private property” cut them off from the only water. No longer free to pursue the lives they had lived for centuries, they were moved onto what are essentially reservations and taught to farm and herd. Alcoholism and violence became rampant.

Among the book’s many characters was a man who had lost most of his leg to snakebite. He suffered terribly as his leg rotted away, cared for by his wife and family who brought him food. Eventually, with help, he healed and obtained a pair of crutches, learned to walk again, and resumed hunting: providing for his family.

And then in “civilization” he was murdered by one of his fellow Bushmen.

It’s a sad story and there are no easy answers. Bushman life is hard. Most people, when given the choice, seem to pick civilization. But usually we aren’t given a choice. The Bushmen weren’t. Neither were factory workers who saw their jobs automated and outsourced. Some Bushmen will adapt and thrive. Nelson Mandela was part Bushman, and he did quite well for himself. But many will suffer.

What to do about the suffering of those left behind–those who cannot cope with change, who do not have the mental or physical capacity to “learn to code” or otherwise adapt–remains an unanswered question. Humanity might move on without them, ignoring their suffering because we find them undeserving of compassion–or we might get bogged down trying to save them all. Perhaps we can find a third route: sympathy for the unfortunate without encouraging obsolete behavior?

In The Great Degeneration, Ferguson wonders why the systems (“code”) that support our society appear to be degenerating. I have a crude but simple answer: people are getting stupider. It takes a certain amount of intelligence to run a piece of code. Even a simple task like transcribing numbers is better performed by a smarter person than a dumber one, who is more likely to accidentally write down the wrong number. Human systems are built and executed by humans, and if the humans running them are less intelligent than the ones who made them, they will do a bad job of running the systems.

Unfortunately for those of us over in civilization, dysgenics is a real thing:

Source: Audacious Epigone

Whether you blame IQ itself or the number of years smart people spend in school, dumb people have more kids (especially the parents of the Baby Boomers.) Epigone here only looks at white data (I believe Jayman has the black data and it’s just as bad, if not worse.)

Of course we can debate about the Flynn effect and all that, but I suspect there are two competing things going on: first, a rising 1950s economic tide lifted all boats, making everyone healthier and thus smarter, better at taking IQ tests, and better at making babies; and second, declining infant mortality since the late 1800s, and possibly the welfare state, made it easier for the children of the poorest and least capable parents to survive.

The effects of these two trends probably canceled out at first, but after a while you run out of Flynn effect (maybe) and then the other starts to show up. Eventually you get Greece: once the shining light of civilization, now defaulting on its loans.

Well, we have made it a page in!

Termite City

What do you think of the book? Have you finished it yet? What do you think of the way Auerswald conceptualizes “code” as the building block of pretty much all human activity? Do you think Auerswald is essentially correct to be hopeful about our increasingly code-driven future, or should we beware of the tradeoffs to individual autonomy and freedom inherent in becoming a glorified colony of ants?


A mercifully short note on Lice and the Invention of Clothes

Lice apparently come in three varieties: head, body, and pubic. The body louse’s genome was published in 2010 and is the shortest known insect genome. (Does parasitism reduce genome length?) According to Wikipedia:

Pediculus humanus humanus (the body louse) is indistinguishable in appearance from Pediculus humanus capitis (the head louse) but will interbreed only under laboratory conditions. In their natural state, they occupy different habitats and do not usually meet. In particular, body lice have evolved to attach their eggs to clothes, whereas head lice attach their eggs to the base of hairs.

So when did the clothes-infesting body louse decide to stop associating with its hair-clinging cousins?

The body louse diverged from the head louse at around 100,000 years ago, hinting at the time of the origin of clothing.[7][8][9]
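Divergence dates like this come from molecular-clock arithmetic: count the genetic differences between two lineages, then divide by how fast differences accumulate. The sketch below illustrates the idea only; the sequence counts and mutation rate are invented placeholders, not the actual values from the louse studies.

```python
# Rough molecular-clock sketch: divergence time = genetic distance / (2 * rate).
# The factor of 2 appears because mutations accumulate along BOTH branches
# since the split. All numbers below are illustrative placeholders, NOT
# values from the louse papers.

def divergence_time(pairwise_differences, sites, mutation_rate_per_site_per_year):
    """Estimate years since two lineages split, assuming a constant clock."""
    distance = pairwise_differences / sites  # substitutions per site
    return distance / (2 * mutation_rate_per_site_per_year)

# Hypothetical example: 500 differences over 1,000,000 compared sites,
# with an assumed rate of 2.5e-9 substitutions per site per year.
t = divergence_time(500, 1_000_000, 2.5e-9)
print(f"Estimated divergence: {t:,.0f} years ago")
# → Estimated divergence: 100,000 years ago
```

The real studies are more sophisticated (they model rate variation and uncertainty), which is one reason different papers citing louse data arrive at different dates.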

So, did Neanderthals have clothes? Or did they survive winters in ice age Europe by being really hairy?

Behavioral modernity–such as intentional burials and cave painting–is thought to have emerged around 50,000 years ago. Some people push this date back to 80,000 years ago, possibly just before the Out of Africa event (something that made people smarter and better at making tools may have been necessary for OOA to succeed.)

But perhaps we should consider the invention of clothing alongside other technological breakthroughs that made us modern–after all, I don’t think we hairless apes could have had much success at conquering the planet without clothes.

(On the other hand, other Wikipedia pages give other estimates for the origin of clothing, some also citing louse studies, so I’m not sure of the 100 kya date, but surely clothes were invented before we went anywhere cold.)

Oddly, though, there appears to have been at least one human group that managed to survive in a cold climate without much in the way of clothes: the Yaghan people of Tierra del Fuego. In fact, the whole reason the region got named Tierra del Fuego (translation: Land of Fire) is that the nearly-naked locals carried fire with them wherever they went to stay warm.

Only 100-1,600 Yaghans remain; their language is an isolate with only one native speaker, and she’s 89 years old.

Unfortunately, searching for “people with no clothes” does not return any useful information about other groups that might have led similar lifestyles.

PS: Pubic lice evolved from gorilla lice 3 million years ago. I bet you didn’t want to know that. Someone should look for that introgression event.

Native Americans also appear to carry a strain of head lice that had previously occupied Homo erectus’s hair, suggesting that H. erectus and the ancestors of today’s Native Americans once met. Since these lice aren’t found elsewhere, this is evidence that H. erectus might have survived somewhere out there until fairly recently.

When did language evolve?

The smartest non-human primates, like Kanzi the bonobo and Koko the gorilla, understand about 2,000 to 4,000 words. Koko can make about 1,000 signs in sign language and Kanzi can use about 450 lexigrams (pictures that stand for words.) Koko can also make some onomatopoetic words–that is, she can make and use imitative sounds in conversation.

A four-year-old human knows about 4,000 words, similar to an exceptional gorilla. An adult knows about 20,000-35,000 words. (Another study puts the upper bound at 42,000.)

Somewhere along our journey from ape-like hominins to Homo sapiens sapiens, our ancestors began talking, but exactly when remains a mystery. The origins of writing have been amusingly easy to discover, because early writers were fond of very durable surfaces, like clay, stone, and bone. Speech, by contrast, evaporates as soon as it is heard–leaving no trace for archaeologists to uncover.

But we can find the things necessary for speech and the things for which speech, in turn, is necessary.

The main reason why chimps and gorillas, even those taught human language, must rely on lexigrams or gestures to communicate is that their voiceboxes, lungs, and throats work differently than ours. Their semi-arboreal lifestyle requires using the ribs as a rigid base for the arm and shoulder muscles while climbing, which in turn requires closing the lungs while climbing to provide support for the ribs.

Full bipedalism released our early ancestors from the constraints on airway design imposed by climbing, freeing us to make a wider variety of vocalizations.

Now is the perfect time to break out my file of relevant human evolution illustrations:

Source: Scientific American What Makes Humans Special

We humans split from our nearest living ape relatives about 7-8 million years ago, but true bipedalism may not have evolved for a few more million years. Since there are many different named hominins, here is a quick guide:

Source: Macroevolution in and Around the Hominin Clade

Australopithecines (light blue in the graph,) such as the famous Lucy, are believed to have been the first fully bipedal hominins, although, based on the shape of their toes, they may have still occasionally retreated into the trees. They lived between 4 and 2 million years ago.

Without delving into the myriad classification debates along the lines of “should we count this set of skulls as a separate species, or are they all part of the natural variation within one species,” by the time the genus Homo arose with H. habilis or H. rudolfensis around 2.8 million years ago, humans were much worse at climbing trees.

Interestingly, one direction humans have continued evolving in is up.

Oldowan tool

The reliable production of stone tools represents an enormous leap forward in human cognition. The first known stone tools–Oldowan–are about 2.5-2.6 million years old and were probably made by Homo habilis. These simple tools are typically shaped on only one side.

By the Acheulean–1.75 million-100,000 years ago–tool making had become much more sophisticated. Not only did knappers shape both sides of both the tops and bottoms of stones, but they also made tools by first shaping a core stone and then flaking derivative pieces from it.

The first Acheulean tools were fashioned by H. erectus; by 100,000 years ago, H. sapiens had presumably taken over the technology.

Flint knapping is surprisingly difficult, as many an archaeology student has discovered.

These technological advances were accompanied by steadily increasing brain sizes.

I propose that the complexities of the Acheulean tool complex required some form of language to facilitate learning and teaching; this gives us a potential lower bound on language around 1.75 million years ago. Bipedalism gives us an upper bound around 4 million years ago, before which our voice boxes were likely more restricted in the sounds they could make.

A Different View

Even though Homo sapiens has been around for about 300,000 years (or rather, that’s where we have chosen to draw the line between our species and its predecessor,) “behavioral modernity” only emerged around 50,000 years ago (very awkward timing if you know anything about human dispersal.)

Everything about behavioral modernity is heavily contested (including when it began,) but no matter how and when you date it, compared to the million years or so it took humans to figure out how to knap the back side of a rock, human technological advance has accelerated significantly over the past 100,000 years, even more so over the past 50,000, and still more over the past 10,000.

Fire was another of humanity’s early technologies:

Claims for the earliest definitive evidence of control of fire by a member of Homo range from 1.7 to 0.2 million years ago (Mya).[1] Evidence for the controlled use of fire by Homo erectus, beginning some 600,000 years ago, has wide scholarly support.[2][3] Flint blades burned in fires roughly 300,000 years ago were found near fossils of early but not entirely modern Homo sapiens in Morocco.[4] Evidence of widespread control of fire by anatomically modern humans dates to approximately 125,000 years ago.[5]

What prompted this sudden acceleration? Noam Chomsky suggests that it was triggered by the evolution of our ability to use and understand language:

Noam Chomsky, a prominent proponent of discontinuity theory, argues that a single chance mutation occurred in one individual in the order of 100,000 years ago, installing the language faculty (a component of the mind–brain) in “perfect” or “near-perfect” form.[6]

(Pumpkin Person has more on Chomsky.)

More specifically, we might say that this single chance mutation created the capacity for figurative or symbolic language, as clearly apes already have the capacity for very simple language. It was this ability to convey abstract ideas, then, that allowed humans to begin expressing themselves in other abstract ways, like cave painting.

I disagree with this view on the grounds that human groups were already pretty widely dispersed by 100,000 years ago. For example, Pygmies and Bushmen are descended from groups of humans who had already split off from the rest of us by then, but they still have symbolic language, art, and everything else contained in the behavioral modernity toolkit. Of course, if a trait is particularly useful or otherwise successful, it can spread extremely quickly (think lactose tolerance,) and neither Bushmen nor Pygmies were 100% genetically isolated for the past 250,000 years, but I simply think the math here doesn’t work out.

However, that doesn’t mean Chomsky isn’t on to something. For example, Johanna Nichols (another linguist,) used statistical models of language differentiation to argue that modern languages split around 100,000 years ago.[31] This coincides neatly with the upper bound on the Out of Africa theory, suggesting that Nichols may actually have found the point when language began differentiating because humans left Africa, or perhaps she found the origin of the linguistic skills necessary to accomplish humanity’s cross-continental trek.

Philip Lieberman and Robert McCarthy looked at the shape of Neanderthal, Homo erectus, early H. sapiens, and modern H. sapiens’ vocal tracts:

In normal adults these two portions of the SVT form a right angle to one another and are approximately equal in length—in a 1:1 proportion. Movements of the tongue within this space, at its midpoint, are capable of producing tenfold changes in the diameter of the SVT. These tongue maneuvers produce the abrupt diameter changes needed to produce the formant frequencies of the vowels found most frequently among the world’s languages—the “quantal” vowels [i], [u], and [a] of the words “see,” “do,” and “ma.” In contrast, the vocal tracts of other living primates are physiologically incapable of producing such vowels.

(Since juvenile humans are shaped differently than adults, they pronounce sounds slightly differently until their voiceboxes fully develop.)
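The acoustics behind the quoted passage can be sketched with the textbook idealization of the vocal tract as a uniform tube, closed at the glottis and open at the lips; its resonances (formants) are the odd quarter-wavelength frequencies. This is a simplification for illustration, not the reconstruction method Lieberman and McCarthy actually used.

```python
# Resonances of a uniform tube closed at one end (glottis) and open at the
# other (lips): F_n = (2n - 1) * c / (4 * L). This is the standard textbook
# idealization behind formant frequencies, not the method of the paper.

SPEED_OF_SOUND = 35_000  # cm/s in warm, humid air (approximate)

def formants(tract_length_cm, n=3):
    """First n resonant frequencies (Hz) of an idealized vocal tract."""
    return [(2 * k - 1) * SPEED_OF_SOUND / (4 * tract_length_cm)
            for k in range(1, n + 1)]

# An adult vocal tract is roughly 17.5 cm long; the neutral (schwa-like)
# vowel then has formants near 500, 1500, and 2500 Hz.
print(formants(17.5))  # → [500.0, 1500.0, 2500.0]
```

The “quantal” vowels [i], [u], and [a] depend on abruptly changing the tube’s diameter at its midpoint–which is exactly the maneuver a 1:1-proportioned SVT makes possible.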

Their results:

…Neanderthal necks were too short and their faces too long to have accommodated equally proportioned SVTs. Although we could not reconstruct the shape of the SVT in the Homo erectus fossil because it does not preserve any cervical vertebrae, it is clear that its face (and underlying horizontal SVT) would have been too long for a 1:1 SVT to fit into its head and neck. Likewise, in order to fit a 1:1 SVT into the reconstructed Neanderthal anatomy, the larynx would have had to be positioned in the Neanderthal’s thorax, behind the sternum and clavicles, much too low for effective swallowing. …

Surprisingly, our reconstruction of the 100,000-year-old specimen from Israel, which is anatomically modern in most respects, also would not have been able to accommodate a SVT with a 1:1 ratio, albeit for a different reason. … Again, like its Neanderthal relatives, this early modern human probably had an SVT with a horizontal dimension longer than its vertical one, translating into an inability to reproduce the full range of today’s human speech.

It was only in our reconstruction of the most recent fossil specimens—the modern humans postdating 50,000 years— that we identified an anatomy that could have accommodated a fully modern, equally proportioned vocal tract.

Just as small children who can’t yet pronounce the letter “r” can nevertheless make and understand language, I don’t think early humans needed to have all of the same sounds as we have in order to communicate with each other. They would have just used fewer sounds.

The change in our voiceboxes may not have triggered the evolution of language, but rather been triggered by language itself. As humans began transmitting more knowledge via language, humans who could make more sounds, and thus utter a greater range of words, perhaps had an edge over their peers–maybe they were seen as particularly clever, or perhaps they had an easier time organizing bands of hunters and warriors.

One of the interesting things about human language is that it is clearly simultaneously cultural–which language you speak is entirely determined by culture–and genetic–only humans can produce language in the way we do. Even the smartest chimps and dolphins cannot match our vocabularies, nor imitate our sounds. Human infants–unless they have some form of brain damage–learn language instinctually, without conscious teaching. (Insert reference to Steven Pinker.)

Some kind of genetic change was obviously necessary to get from ape communication to human language use, but exactly what remains unclear.

A variety of genes are associated with language use, e.g., FOXP2. H. sapiens and chimps have different versions of the FOXP2 gene (and Neanderthals have a third, more similar to the H. sapiens version than to the chimp’s,) but to my knowledge we have yet to discover exactly when the necessary mutations arose.

Despite their impressive skulls and survival in a harsh, novel climate, Neanderthals seem not to have engaged in much symbolic activity (though to be fair, they were wiped out right about the time Sapiens really got going with its symbolic activity.) Homo sapiens and Homo neanderthalensis split around 800-400,000 years ago–perhaps the difference in our language genes ultimately gave Sapiens the upper hand.

Just as farming appears to have emerged relatively independently in several different locations around the world at about the same time, so behavioral modernity seems to have taken off in several different groups around the same time. Of course we can’t rule out the possibility that these groups had some form of contact with each other–peaceful or otherwise–but it seems more likely to me that similar behaviors emerged in disparate groups around the same time because the cognitive precursors necessary for those behaviors had already begun before they split.

Based on genetics, the shape of their larynges, and their cultural toolkits, Neanderthals probably did not have modern speech, but they may have had something similar to it. This suggests that at the time of the Sapiens-Neanderthal split, our common ancestor possessed some primitive speech capacity.

By the time Sapiens and Neanderthals encountered each other again, nearly half a million years later, Sapiens’ language ability had advanced, possibly due to further modification of FOXP2 and other genes like it, plus our newly modified voiceboxes, while Neanderthals’ had lagged. Sapiens achieved behavioral modernity and took over the planet, while Neanderthals disappeared.

 

North Africa in Genetics and History

detailed map of African and Middle Eastern ethnicities in Haak et al.’s dataset

North Africa is an often misunderstood region in human genetics. Since it is in Africa, people often assume that it contains the same variety of people referenced in terms like “African Americans,” “black Africans,” or even just “Africans.” In reality, the African continent contains members of all three of the great human clades–Sub-Saharan Africans in the south, Austronesians (Asian clade) in Madagascar, and Caucasians in the north.

The North African Middle Stone Age and its place in recent human evolution provides an overview of the first 275,000 years of humanity’s history in the region (300,000-25,000 years ago, more or less), including the development of symbolic culture and early human dispersal. Unfortunately the paper is paywalled.

Throughout most of human history, the Sahara–not the Mediterranean or Red Seas–has been the biggest local impediment to human migration–thus North Africans are much closer, genetically, to their neighbors in Europe and the Middle East than to their neighbors across the desert (and before the domestication of the camel, about 3,000 years ago, the Sahara was even harder to cross.)

But from time to time, global weather patterns change and the Sahara becomes a garden: the Green Sahara. The last time we had a Green Sahara was about 9-7,000 years ago; during this time, people lived, hunted, fished, herded and perhaps farmed throughout areas that are today nearly uninhabited wastes.

The Peopling of the last Green Sahara revealed by high-coverage resequencing of trans-Saharan patrilineages sheds light on how the Green (and subsequently brown) Sahara affected the spread (and separation) of African groups into northern and sub-Saharan populations:

In order to investigate the role of the last Green Sahara in the peopling of Africa, we deep-sequence the whole non-repetitive portion of the Y chromosome in 104 males selected as representative of haplogroups which are currently found to the north and to the south of the Sahara. … We find that the coalescence age of the trans-Saharan haplogroups dates back to the last Green Sahara, while most northern African or sub-Saharan clades expanded locally in the subsequent arid phase. …

Our findings suggest that the Green Sahara promoted human movements and demographic expansions, possibly linked to the adoption of pastoralism. Comparing our results with previously reported genome-wide data, we also find evidence for a sex-biased sub-Saharan contribution to northern Africans, suggesting that historical events such as the trans-Saharan slave trade mainly contributed to the mtDNA and autosomal gene pool, whereas the northern African paternal gene pool was mainly shaped by more ancient events.

In other words, modern North Africans have some maternal (female) Sub-Saharan DNA that arrived recently via the Islamic slave trade, but most of their Sub-Saharan Y-DNA (male) is much older, hailing from the last time the Sahara was easy to cross.

Note that not much DNA is shared across the Sahara:

After the African humid period, the climatic conditions became rapidly hyper-arid and the Green Sahara was replaced by the desert, which acted as a strong geographic barrier against human movements between northern and sub-Saharan Africa.

A consequence of this is that there is a strong differentiation in the Y chromosome haplogroup composition between the northern and sub-Saharan regions of the African continent. In the northern area, the predominant Y lineages are J-M267 and E-M81, with the former being linked to the Neolithic expansion in the Near East and the latter reaching frequencies as high as 80 % in some north-western populations as a consequence of a very recent local demographic expansion [810]. On the contrary, sub-Saharan Africa is characterised by a completely different genetic landscape, with lineages within E-M2 and haplogroup B comprising most of the Y chromosomes. In most regions of sub-Saharan Africa, the observed haplogroup distribution has been linked to the recent (~ 3 kya) demic diffusion of Bantu agriculturalists, which brought E-M2 sub-clades from central Africa to the East and to the South [1117]. On the contrary, the sub-Saharan distribution of B-M150 seems to have more ancient origins, since its internal lineages are present in both Bantu farmers and non-Bantu hunter-gatherers and coalesce long before the Bantu expansion [1820].

In spite of their genetic differentiation, however, northern and sub-Saharan Africa share at least four patrilineages at different frequencies, namely A3-M13, E-M2, E-M78 and R-V88.

A recent article in Nature, “Whole Y-chromosome sequences reveal an extremely recent origin of the most common North African paternal lineage E-M183 (M81),” tells some of North Africa’s fascinating story:

Here, by using whole Y chromosome sequences, we intend to shed some light on the historical and demographic processes that modelled the genetic landscape of North Africa. Previous studies suggested that the strategic location of North Africa, separated from Europe by the Mediterranean Sea, from the rest of the African continent by the Sahara Desert and limited to the East by the Arabian Peninsula, has shaped the genetic complexity of current North Africans15,16,17. Early modern humans arrived in North Africa 190–140 kya (thousand years ago)18, and several cultures settled in the area before the Holocene. In fact, a previous study by Henn et al.19 identified a gradient of likely autochthonous North African ancestry, probably derived from an ancient “back-to-Africa” gene flow prior to the Holocene (12 kya). In historic times, North Africa has been populated successively by different groups, including Phoenicians, Romans, Vandals and Byzantines. The most important human settlement in North Africa was conducted by the Arabs by the end of the 7th century. Recent studies have demonstrated the complexity of human migrations in the area, resulting from an amalgam of ancestral components in North African groups15,20.

According to the article, E-M81 is dominant in Northwest Africa and absent almost everywhere else in the world.

The authors tested various men across north Africa in order to draw up a phylogenic tree of the branching of E-M183:

The distribution of each subhaplogroup within E-M183 can be observed in Table 1 and Fig. 2. Indeed, different populations present different subhaplogroup compositions. For example, whereas in Morocco almost all subhaplogroups are present, Western Sahara shows a very homogeneous pattern with only E-SM001 and E-Z5009 being represented. A similar picture to that of Western Sahara is shown by the Reguibates from Algeria, which contrast sharply with the Algerians from Oran, which showed a high diversity of haplogroups. It is also worth to notice that a slightly different pattern could be appreciated in coastal populations when compared with more inland territories (Western Sahara, Algerian Reguibates).

Overall, the authors found that the haplotypes were “strikingly similar” to each other and showed little geographic structure besides the coastal/inland differences:

As proposed by Larmuseau et al.25, the scenario that better explains Y-STR haplotype similarity within a particular haplogroup is a recent and rapid radiation of subhaplogroups. Although the dating of this lineage has been controversial, with dates proposed ranging from Paleolithic to Neolithic and to more recent times17,22,28, our results suggested that the origin of E-M183 is much more recent than was previously thought. … In addition to the recent radiation suggested by the high haplotype resemblance, the pattern showed by E-M183 implies that subhaplogroups originated within a relatively short time period, in a burst similar to those happening in many Y-chromosome haplogroups23.

In other words, someone went a-conquering.

Alternatively, given the high frequency of E-M183 in the Maghreb, a local origin of E-M183 in NW Africa could be envisaged, which would fit the clear pattern of longitudinal isolation by distance reported in genome-wide studies15,20. Moreover, the presence of autochthonous North African E-M81 lineages in the indigenous population of the Canary Islands, strongly points to North Africa as the most probable origin of the Guanche ancestors29. This, together with the fact that the oldest indigenous individuals have been dated 2210 ± 60 ya, supports a local origin of E-M183 in NW Africa. Within this scenario, it is also worth to mention that the paternal lineage of an early Neolithic Moroccan individual appeared to be distantly related to the typically North African E-M81 haplogroup30, suggesting again a NW African origin of E-M183. A local origin of E-M183 in NW Africa > 2200 ya is supported by our TMRCA estimates, which can be taken as 2,000–3,000, depending on the data, methods, and mutation rates used.

However, the authors also note that they can’t rule out a Middle Eastern origin for the haplogroup since their study simply doesn’t include genomes from Middle Eastern individuals. They rule out a spread during the Neolithic expansion (too early) but not the Islamic expansion (“an extensive, male-biased Near Eastern admixture event is registered ~1300 ya, coincidental with the Arab expansion20.”) Alternatively, they suggest E-M183 might have expanded near the end of the third Punic War. Sure, Carthage (in Tunisia) was defeated by the Romans, but the era was otherwise one of great North African wealth and prosperity.

 

Interesting papers! My hat’s off to the authors. I hope you enjoyed them and get a chance to RTWT.

Anthropology Friday: Numbers and the Making of Us, part 2

Welcome to part 2 of my review of Caleb Everett’s Numbers and the Making of Us: Counting and the Course of Human Cultures.

I was really excited about this book when I picked it up at the library. It has the word “numbers” on the cover and a subtitle that implies a story about human cultural and cognitive evolution.

Regrettably, what could have been a great book has turned out to be kind of annoying. There’s some fascinating information in here–for example, there’s a really interesting part on pages 249-252–but you have to get through pages 1-248 to get there. (Unfortunately, sometimes authors put their most interesting bits at the end so that people looking to make trouble have gotten bored and wandered off by then.)

I shall try to discuss/quote some of the book’s more interesting bits, and leave aside my differences with the author (who keeps reiterating his position that mathematical ability is entirely dependent on the culture you’re raised in.) Everett nonetheless has a fascinating perspective, having actually spent much of his childhood in a remote Amazonian village belonging to the Piraha, who have no real words for numbers. (His parents were missionaries.)

Which languages contain number words? Which don’t? Everett gives a broad survey:

“…we can reach a few broad conclusions about numbers in speech. First, they are common to nearly all of the world’s languages. … this discussion has shown that number words, across unrelated languages, tend to exhibit striking parallels, since most languages employ a biologically based body-part model evident in their number bases.”

That is, many languages have words that translate essentially to “One, Two, Three, Four, Hand, … Two hands, (10)… Two Feet, (20),” etc., and reflect this in their higher counting systems, which can end up containing a mix of base five, 10, and 20. (The Romans, for example, used both base five and ten in their written system.)

“Third, the linguistic evidence suggests not only that this body-part model has motivated the innovation of numbers throughout the world, but also that this body-part basis of number words stretches back historically as far as the linguistic data can take us. It is evident in reconstruction of ancestral languages, including Proto-Sino-Tibetan, Proto-Niger-Congo, Proto-Austronesian, and Proto-Indo-European, the languages whose descendant tongues are best represented in the world today.”

Note, though, that linguistics does not actually give us a very long time horizon. Proto-Indo-European was spoken about 4-6,000 years ago. Proto-Sino-Tibetan is not as well studied yet as PIE, but also appears to be at most 6,000 years old. Proto-Niger-Congo is probably about 5-6,000 years old. Proto-Austronesian (which, despite its name, is not associated with Australia,) is about 5,000 years old.

These ranges are not a coincidence: languages change as they age, and once they have changed too much, they become impossible to classify into language families. Older languages, like Basque or Ainu, are often simply described as isolates, because we can’t link them to their relatives. Since humanity itself is 200,000-300,000 years old, comparative linguistics only opens a very short window into the past. Various groups–like the Amazonian tribes Everett studies–split off from other groups of humans thousands 0r hundreds of thousands of years before anyone started speaking Proto-Indo-European. Even agriculture, which began about 10,000-15,000 years ago, is older than these proto-languages (and agriculture seems to have prompted the real development of math.)

I also note these language families are the world’s biggest because they successfully conquered speakers of the world’s other languages. Spanish, Portuguese, and English are now widely spoken in the Americas instead of Cherokee, Mayan, and Nheengatu because Indo-European language speakers conquered the speakers of those languages.

The guy with the better numbers doesn’t always conquer the guy with the worse numbers–the Mongol conquest of China is an obvious counterexample. But in these cases, the superior number system sticks around, because no one wants to replace good numbers with bad ones.

In general, though, better tech–which requires numbers–tends to conquer worse tech.

Which means that even though our most successful language families all have number words that appear to be about 4-6,000 years old, we shouldn’t assume this was the norm for most people throughout most of history. Current human numeracy may be a very recent phenomenon.

“The invention of number is attainable by the human mind but is attained through our fingers. Linguistic data, both historical and current, suggest that numbers in disparate cultures have arisen independently, on an indeterminate range of occasions, through the realization that hands can be used to name quantities like 5 and 10. … Words, our ultimate implements for abstract symbolization, can thankfully be enlisted to denote quantities. But they are usually enlisted only after people establish a more concrete embodied correspondence between their fingers and quantities.”

Some more on numbers in different languages:

“Rare number bases have been observed, for instance, in the quaternary (base-4) systems of Lainana languages of California, or in the senary (base-6) systems that are found in southern New Guinea. …

Several languages in Melanesia and Polynesia have or once had number systems that vary in accordance with the type of object being counted. In the case of Old High Fijian, for instance, the word for 100 was Bola when people were counting canoes, but Kora when they were counting coconuts. …

some languages in northwest Amazonia base their numbers on kinship relationships. This is true of Daw and Hup, two related languages in the region. Speakers of the former language use fingers complemented with words when counting from 4 to 10. The fingers signify the quantity of items being counted, but words are used to denote whether the quantity is odd or even. If the quantity is even, speakers say it “has a brother,” if it is odd they state it “has no brother.”

What about languages with no or very few words for numbers?

In one recent survey of limited number systems, it was found that more than a dozen languages lack bases altogether, and several do not have words for exact quantities beyond 2 and, in some cases, beyond 1. Of course, such cases represent a minuscule fraction of the world’s languages, the bulk of which have number bases reflecting the body-part model. Furthermore, most of the extreme cases in question are restricted geographically to Amazonia. …

All of the extremely restricted languages, I believe, are used by people who are hunter-gatherers or horticulturalists, eg, the Munduruku. Hunter-gatherers typically don’t have a lot of goods to keep track of or trade, fields to measure or taxes to pay, and so don’t need to use a lot of numbers. (Note, however, that the Inuit/Eskimo have a perfectly normal base-20 counting system. Their particularly harsh environment appears to have inspired both technological and cultural adaptations.) But why are Amazonian languages even less numeric than those of other hunter-gatherers from similar environments, like those of central Africa?

Famously, most of the languages of Australia have somewhat limited number systems, and some linguists previously claimed that most Australian languages lack precise terms for quantities beyond 2…. [however] many languages on that continent actually have native means of describing various quantities in precise ways, and their number words for small quantities can sometimes be combined to represent larger quantities via the additive and even multiplicative usage of bases. …

Of the nearly 200 Australian languages considered in the survey, all have words to denote 1 and 2. In about three-quarters of the languages, however, the highest number is 3 or 4. Still, many of the languages use a word for “two” as a base for other numbers. Several of the languages use a word for “five” as a base, and eight of the languages top out at a word for “ten.”

Everett then digresses into what initially seems like a tangent about grammatical number, but luckily I enjoy comparative linguistics.

In an incredibly comprehensive survey of 1,066 languages, linguist Matthew Dryer recently found that 98 of them are like Karitiana and lack a grammatical means of marking nouns as being plural. So it is not particularly rare to find languages in which nouns do not show plurality. … about 90% of them have a grammatical means through which speakers can convey whether they are talking about one or more than one thing.

Mandarin is a major language that has limited expression of plurals. According to Wikipedia:

The grammar of Standard Chinese shares many features with other varieties of Chinese. The language almost entirely lacks inflection, so that words typically have only one grammatical form. Categories such as number (singular or plural) and verb tense are frequently not expressed by any grammatical means, although there are several particles that serve to express verbal aspect, and to some extent mood.

Some languages, such as modern Arabic and Proto-Indo-European, also have a “dual” category distinct from singular or plural; an extremely small set of languages has a trial category.

Many languages also change their verbs depending on how many nouns are involved; in English we say “He runs; they run;” languages like Latin or Spanish have far more extensive systems.

In sum: the vast majority of languages distinguish between 1 and more than one; a few distinguish between one, two, and many, and a very few distinguish between one, two, three, and many.

From the endnotes:

… some controversial claims of quadral markers, used in restricted contexts, have been made for the Austronesian languages Tangga, Marshallese, and Sursurunga. … As Corbett notes in his comprehensive survey, the forms are probably best not considered quadral markers. In fact, his impressive survey did not uncover any cases of quadral marking in the world’s languages.

Everett tends to bury his point; his intention in this chapter is to marshal support for the idea that humans have an “innate number sense” that allows them to pretty much instantly realize if they are looking at 1, 2, or 3 objects, but does not allow for instant recognition of larger numbers, like 4. He posits a second, much vaguer number sense that lets us distinguish between “big” and “small” amounts of things, eg, 10 looks smaller than 100, even if you can’t count.

He does cite actual neuroscience on this point–he’s not just making it up. Even newborn humans appear to be able to distinguish between 1, 2, and 3 of something, but not larger numbers. They also seem to distinguish between some and a bunch of something. Anumeric peoples, like the Piraha, also appear to only distinguish between 1, 2, and 3 items with good accuracy, though they can tell “a little” “some” and “a lot” apart. Everett also cites data from animal studies that find, similarly, that animals can distinguish 1, 2, and 3, as well as “a little” and “a lot”. (I had been hoping for a discussion of cephalopod intelligence, but unfortunately, no.)

How then, Everett asks, do we wed our specific number sense (1, 2, and 3) with our general number sense (“some” vs “a lot”) to produce ideas like 6, 7, and a googol? He proposes that we have no innate idea of 6, nor ability to count to 10. Rather, we can count because we were taught to (just as some highly trained parrots and chimps can.) It is only the presence of number words in our languages that allows us to count past 3–after all, anumeric people cannot.

But I feel like Everett is railroading us to a particular conclusion. For example, he cites neurology studies that found one part of the brain does math–the intraparietal sulcus (IPS)–but only one part? Surely there’s more than one part of the brain involved in math.

About 5 seconds of Googling got me “Neural Basis of Mathematical Cognition,” which states that:

The IPS turns out to be part of the extensive network of brain areas that support human arithmetic (Figure 1). Like all networks it is distributed, and it is clear that numerical cognition engages perceptual, motor, spatial and mnemonic functions, but the hub areas are the parietal lobes …

(By contrast, I’ve spent over half an hour searching and failing to figure out how high octopuses can count.)

Moreover, I question the idea that the specific and general number senses are actually separate. Rather, I suspect there is only one sense, but it is essentially logarithmic. For example, hearing is logarithmic (or perhaps exponential,) which is why decibels are also logarithmic. Vision is also logarithmic:

The eye senses brightness approximately logarithmically over a moderate range (but more like a power law over a wider range), and stellar magnitude is measured on a logarithmic scale.[14] This magnitude scale was invented by the ancient Greek astronomer Hipparchus in about 150 B.C. He ranked the stars he could see in terms of their brightness, with 1 representing the brightest down to 6 representing the faintest, though now the scale has been extended beyond these limits; an increase in 5 magnitudes corresponds to a decrease in brightness by a factor of 100.[14] Modern researchers have attempted to incorporate such perceptual effects into mathematical models of vision.[15][16]

So many experiments have revealed logarithmic responses to stimuli that someone has formulated a mathematical “law” on the matter:

Fechner’s law states that the subjective sensation is proportional to the logarithm of the stimulus intensity. According to this law, human perceptions of sight and sound work as follows: Perceived loudness/brightness is proportional to logarithm of the actual intensity measured with an accurate nonhuman instrument.[3]

p = k ln(S/S₀)

The relationship between stimulus and perception is logarithmic. This logarithmic relationship means that if a stimulus varies as a geometric progression (i.e., multiplied by a fixed factor), the corresponding perception is altered in an arithmetic progression (i.e., in additive constant amounts). For example, if a stimulus is tripled in strength (i.e., 3 x 1), the corresponding perception may be two times as strong as its original value (i.e., 1 + 1). If the stimulus is again tripled in strength (i.e., 3 x 3 x 3), the corresponding perception will be three times as strong as its original value (i.e., 1 + 1 + 1). Hence, for multiplications in stimulus strength, the strength of perception only adds. The mathematical derivations of the torques on a simple beam balance produce a description that is strictly compatible with Weber’s law.[6][7]

In any logarithmic scale, small quantities–like 1, 2, and 3–are easy to distinguish, while medium quantities–like 101, 102, and 103–get lumped together as “approximately the same.”
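To make the logarithmic point concrete, here is a quick sketch in Python (the scaling constant k and the example quantities are arbitrary, chosen only for illustration):

```python
import math

def perceived_difference(s1, s2, k=1.0):
    """Fechner's law: perceived magnitude p = k * ln(S / S0), so the
    perceived gap between two stimuli is k * ln(s2 / s1) -- it depends
    on the ratio of the stimuli, not on their raw difference."""
    return k * math.log(s2 / s1)

# 1 vs 2: ratio 2.0 -> a large perceived gap
print(round(perceived_difference(1, 2), 3))      # 0.693
# 101 vs 102: ratio ~1.01 -> nearly indistinguishable
print(round(perceived_difference(101, 102), 3))  # 0.01
```

Note that 2 vs 4 and 50 vs 100 come out identical under this model: both are a doubling, so both yield the same perceived gap.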

Of course, this still doesn’t answer the question of how people develop the ability to count past 3, but this is getting long, so we’ll continue our discussion next week.

Review: Numbers and the Making of Us, by Caleb Everett

I’m about halfway through Caleb Everett’s Numbers and the Making of Us: Counting and the Course of Human Cultures. Everett begins the book with a lengthy clarification that he thinks everyone in the world has equal math abilities, some of us just happen to have been exposed to more number ideas than others. Once that’s out of the way, the book gets interesting.

When did humans invent numbers? It’s hard to say. We have notched sticks from the Paleolithic, but no way to tell if these notches were meant to signify numbers or were just decorative.

The Ishango, Lebombo, and Wolf bones seem more likely to indicate that someone was at least counting something–if not keeping track of it.

The Ishango bone (estimated 20,000 years old, found in the Democratic Republic of the Congo near the headwaters of the Nile,) has three sets of notches–two sets total to 60, the third to 48. Interestingly, the notches are grouped: one set of sixty is composed entirely of primes, 19 + 17 + 13 + 11, while the other sums to 60 as 9 + 19 + 21 + 11. The set of 48 contains groups of 3, 6, 4, 8, 10, 5, 5, and 7. Aside from the stray seven, the sequence tantalizingly suggests that someone was doubling numbers.

Ishango Bone

The Ishango bone also has a quartz point set into the end, which perhaps allowed it to be used for scraping, drawing, or etching–or perhaps it just looked nice atop someone’s decorated bone.
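The notch groupings above are easy to check for yourself; a few lines of Python (simply restating the counts given earlier) confirm the sums, the prime run, and the doublings:

```python
col_a = [19, 17, 13, 11]            # sums to 60; all primes
col_b = [9, 19, 21, 11]             # also sums to 60
col_c = [3, 6, 4, 8, 10, 5, 5, 7]   # sums to 48; note 3->6, 4->8, 5->10

def is_prime(n):
    """Trial division; fine for numbers this small."""
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

print(sum(col_a), all(is_prime(n) for n in col_a))  # 60 True
print(sum(col_b), all(is_prime(n) for n in col_b))  # 60 False (9 and 21)
print(sum(col_c))                                   # 48
```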

The Lebombo bone, (estimated 43,000-44,200 years old, found near the border between South Africa and Swaziland,) is quite similar to the Ishango bone, but only contains 29 notches (as far as we can tell–it’s broken.)

I’ve seen a lot of people proclaiming “Scientists think it was used to keep track of menstrual cycles. Menstruating African women were the first mathematicians!” so I’m just going to let you in on a little secret: scientists have no idea what it was for. Maybe someone was just having fun putting notches on a bone. Maybe someone was trying to count all of their relatives. Maybe someone was counting days between new and full moons, or counting down to an important date.

Without a far richer archaeological assemblage than one bone, we have no idea what this particular person might have wanted to count or keep track of. (Also, why would anyone want to keep track of menstrual cycles? You’ll know when they happen.)

The Wolf bone (30,000 years old, Czech Republic,) has received far less interest from folks interested in proclaiming that menstruating African women were the first mathematicians, but is a nice looking artifact with 60 notches–notches 30 and 31 are significantly longer than the others, as though marking a significant place in the counting (or perhaps just the middle of the pattern.)

Everett cites another, more satisfying tally stick: a 10,000 year old piece of antler found in the anoxic waters of Little Salt Spring, Florida. The antler contains two sets of marks: 28 (or possibly 29–the top is broken in a way that suggests another notch might have been a weak point contributing to the break) large, regular, evenly spaced notches running up the antler, and a much smaller set of notches set beside and just slightly beneath the first. It definitely looks like someone was ticking off quantities of something they wanted to keep track of.

Here’s an article with more information on Little Salt Spring and a good photograph of the antler.

I consider the bones “maybes” and the Little Salt Spring antler a definite for counting/keeping track of quantities.

Inca Quipu

Everett also mentions a much more recent and highly inventive tally system: the Incan quipu.

A quipu is made of knotted strings attached to one central string. A series of knots along the length of each string denotes numbers–one knot for 1, two for 2, etc. The knots are grouped in clusters, allowing place value–first cluster for the ones, second for the tens, third for hundreds, etc. (And a blank space for a zero.)

Thus a sequence of 2 knots, 4 knots, a space, and 5 knots–read from the ones cluster up–encodes 2 + 40 + 0 + 5,000 = 5,042

The Incas, you see, had an empire to administer, no paper, but plenty of lovely alpaca wool. So being inventive people, they made do.
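The scheme is simple enough to sketch in a few lines of Python (the list-of-clusters input format, ones cluster first, is my own convention for illustration):

```python
def quipu_value(clusters):
    """Decode a quipu cord given as knot counts per cluster, ones
    cluster first; a cluster of 0 knots is the blank space that
    marks a zero in that place value."""
    return sum(knots * 10**place for place, knots in enumerate(clusters))

# 3 knots, a blank space, 7 knots (reading up the cord) -> 703
print(quipu_value([3, 0, 7]))  # 703
```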

Everett then discusses the construction of names for numbers/base systems in different languages. Many languages use a combination of different bases, eg, “two twos” for four, (base 2,) “two hands” to signify 10 (base 5,) and from there, words for multiples of 10 or 20, (base 10 or 20,) can all appear in the same language. He argues convincingly that most languages derived their counting words from our original tally sticks: fingers and toes, found in quantities of 5, 10, and 20. So the number for 5 in a language might be “one hand”, the number for 10, “Two hands,” and the number for 20 “one person” (two hands + two feet.) We could express the number 200 in such a language by saying “two hands of one person”= 10 x 20.
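Here is a hypothetical sketch of such a body-part system in Python. The particular words and the purely additive decomposition are my inventions for illustration; a real system, as the “two hands of one person” example shows, can also compose multiplicatively (10 × 20 = 200):

```python
def body_count(n):
    """Express n in a made-up base-5/20 body-part system:
    'hand' = 5 fingers, 'person' = 20 (two hands + two feet)."""
    persons, rest = divmod(n, 20)
    hands, fingers = divmod(rest, 5)
    parts = []
    if persons:
        parts.append(f"{persons} person" + ("s" if persons > 1 else ""))
    if hands:
        parts.append(f"{hands} hand" + ("s" if hands > 1 else ""))
    if fingers:
        parts.append(f"{fingers} finger" + ("s" if fingers > 1 else ""))
    return " and ".join(parts) or "none"

print(body_count(37))  # 1 person and 3 hands and 2 fingers  (20 + 15 + 2)
```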

(If you’re wondering how anyone could come up with a base 60 system, such as we inherited from the Babylonians for telling time, try using the knuckles of the four fingers on one hand [12] times the fingers of the other hand [5] to get 60.)

Which raises the question of what counts as a “number” word (numeral). Some languages, it is claimed, don’t have words for numbers higher than 3–but put out an array of 6 objects, and their speakers can construct numbers like “three twos.” Is this a number? What about the number in English that comes after twelve: four-teen, really just a longstanding mispronunciation of four and ten?

Perhaps a better question than “Do they have a word for it,” is “Do they have a common, easy to use word for it?” English contains the word nonillion, but you probably don’t use it very often (and according to the dictionary, a nonillion is much bigger in Britain than in the US, which makes it especially useless.) By contrast, you probably use quantities like a hundred or a thousand all the time, especially when thinking about household budgets.

Roman numerals are really just an advanced tally system with two bases: 5 and 10. IIII are clearly regular tally marks. V (5) is similar to our practice of crossing through four tally marks. X (10) is two Vs set together. L (50) is a rotated V. C (100) is an abbreviation for the Roman word Centum, hundred. (I, V, X, and L are not abbreviations.) I’m not sure why 500 is D; maybe just because D follows C and it looks like a C with an extra line. M is short for Mille, or thousand. Roman numerals are also fairly unusual in their use of subtraction in writing numbers, which few systems do because it makes addition horrible. Eg, IV and VI are not the same number, nor do they equal 15 and 51. No, they equal 4 (V-1) and 6 (V+1,) respectively. Adding or multiplying large Roman numerals quickly becomes cumbersome; if you don’t believe me, try XLVII times XVIII with only a pencil and paper.
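For comparison, here is a sketch in Python of how a machine does that multiplication painlessly: by converting out of the tally notation into place value first. (These conversion routines are standard textbook fare, not anything from Everett’s book.)

```python
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s):
    """Apply the subtractive rule: a symbol smaller than the one
    after it is subtracted (the I in IV), otherwise it is added."""
    total = 0
    for i, ch in enumerate(s):
        v = VALUES[ch]
        total += -v if i + 1 < len(s) and VALUES[s[i + 1]] > v else v
    return total

def int_to_roman(n):
    """Greedy conversion, largest value first, with the standard
    subtractive pairs (CM, XC, IV, ...) included in the table."""
    pairs = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
             (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
             (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, sym in pairs:
        count, n = divmod(n, value)
        out.append(sym * count)
    return "".join(out)

a, b = roman_to_int("XLVII"), roman_to_int("XVIII")  # 47 and 18
print(int_to_roman(a * b))  # DCCCXLVI (846)
```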

Now imagine you’re trying to run an empire this way.

You’re probably thinking, “At least those quipus had a zero and were reliably base ten,” about now.

Interestingly, the Mayans (and possibly the Olmecs) already had a proper symbol that they used for zero in their combination base-5/base-20 system with pretty functional place value at a time when the Greeks and Romans did not (the ancient Greeks were philosophically unsure about this concept of a “number that isn’t there.”)

(Note: given the level of sophistication of Native American civilizations like the Inca, Aztec, and Maya, and the fact that these developed in near total isolation, they must have been pretty smart. Their current populations appear to be under-performing relative to their ancestors.)

But let’s let Everett have a chance to speak:

Our increasingly refined means of survival and adaptation are the result of a cultural ratchet. This term, popularized by Duke University psychologist and primatologist Michael Tomasello, refers to the fact that humans cooperatively lock in knowledge from one generation to the next, like the clicking of a ratchet. In other words, our species’ success is due in large measure to individual members’ ability to learn from and emulate the advantageous behavior of their predecessors and contemporaries in their community. What makes humans special is not simply that we are so smart, it is that we do not have to continually come up with new solutions to the same old problems. …

Now this is eminently reasonable; I did not invent the calculus, nor could I have done so had it not already existed. Luckily for me, Newton and Leibniz already invented it and I live in a society that goes to great lengths to encode math in textbooks and teach it to students.

I call this “cultural knowledge” or “cultural memory,” and without it we’d still be monkeys with rocks.

The importance of gradually acquired knowledge stored in the community, culturally reified but not housed in the mind of any one individual, crystallizes when we consider cases in which entire cultures have nearly gone extinct because some of their stored knowledge dissipated due to the death of individuals who served as crucial nodes in their community’s knowledge network. In the case of the Polar Inuit of Northwest Greenland, population declined in the mid-nineteenth century after an epidemic killed several elders of the community. These elders were buried along with their tools and weapons, in accordance with local tradition, and the Inuits’ ability to manufacture the tools and weapons in question was severely compromised. … As a result, their population did not recover until about 40 years later, when contact with another Inuit group allowed for the restoration of the communal knowledge base.

The first big advance, the one that separates us from the rest of the animal kingdom, was language itself. Yes, other animals can communicate–whales and birds sing; bees do their waggle dance–but only humans have full-fledged, generative language which allows us to both encode and decode new ideas with relative ease. Language lets different people in a tribe learn different things and then pool their ideas far more efficiently than mere imitation.

The next big leap was the development of visual symbols we could record–and read–on wood, clay, wax, bones, cloth, cave walls, etc. Everett suggests that the first of these symbols were likely tally marks such as those found on the Lebombo bone, though of course the ability to encode a buffalo on the wall of the Lascaux cave, France, was also significant. From these first symbols we developed both numbers and letters, which eventually evolved into books.

Books are incredible. Books are like external hard drives for your brain, letting you store, access, and transfer information to other people well beyond your own limits of memorization and well beyond a human lifetime. Books reach across the ages, allowing us to read what philosophers, poets, priests and sages were thinking about a thousand years ago.

Recently we invented an even more incredible information storage/transfer device: computers/the internet. To be fair, they aren’t as sturdy as clay tablets, (fired clay is practically immortal,) but they can handle immense quantities of data–and make it searchable, an incredibly important task.

But Everett tries to claim that cultural ratchet is all there is to human mathematical ability. If you live in a society with calculus textbooks, then you can learn calculus, and if you don’t, you can’t. Everett does not want to imply that Amazonian tribesmen with no words for numbers bigger than three are in any way less able to do math than the Mayans with their place value system and fancy zero.

But this seems unlikely for two reasons. First, we know very well that even in societies with calculus textbooks, not everyone can make use of them. Even among my own children, who have been raised with about as similar an environment as a human can make and have very similar genetics, there’s a striking difference in intellectual strengths and weaknesses. Humans are not identical in their abilities.

Moreover, we know that different mental tasks are performed in different, specialized parts of the brain. For example, we decode letters in the “visual word form area” of the brain; people whose VWAs have been damaged can still read, but they have to use different parts of their brains to work out the letters and they end up reading more slowly than they did before.

Memorably, before he died, the late Henry Harpending (of West Hunter) had a stroke while in Germany. He initially didn’t notice the stroke because it was located in the part of the brain that decodes letters into words, but since he was in Germany, he didn’t expect to read the words, anyway. It was only when he looked at something written in English later that day that he realized he couldn’t read it, and soon after I believe he passed out and was taken to the hospital.

Why should our brains have a VWA at all? It’s not like our primate ancestors did a whole lot of reading. It turns out that the VWA is repurposed from the part of our brain that recognizes faces :)

Likewise, there are specific regions of the brain that handle mathematical tasks. People who are better at math not only have more gray matter in these regions, but they also have stronger connections between them, letting them work together in harmony to solve different problems. We don’t do math by just throwing all of our mental power at a problem, but by routing it through specific regions of our brain.

Interestingly, humans and chimps differ in their ability to recognize faces and perceive emotions. (For anatomical reasons, chimps are more inclined to identify each other’s bottoms than each other’s faces.) We evolved the ability to recognize faces–in the region of our brain we now use to decode letters–when we began walking upright and interacting with each other face to face, though we do have some vestigial interest in butts and butt-like regions (“My eyes are up here.”) Our brains have evolved over the millennia to get better at specific tasks–in this case, face reading, a precursor to decoding symbolic language.

And there is a tremendous quantity of evidence that intelligence is at least partly genetic–estimates for the heritability of intelligence range between 60 and 80%. The rest of the variation–the environmental part–looks to be essentially random chance, such as accidents, nutrition, or perhaps your third grade teacher.

So, yes, we absolutely can breed people for mathematical or linguistic ability, if that’s what the environment is selecting for. By contrast, if there have been no particular mathematical or linguistic selection pressures in an environment (a culture with no written language, no mathematical notation, and very few words for numbers clearly is not experiencing much pressure to use them), then you won’t select for such abilities. The question is not whether we can all be Newtons, (or Leibnizes,) but how many Newtons a society produces and how many people in that society have the potential to understand calculus, given the chance.

I do wonder why he made the graph so much bigger than the relevant part
Lifted gratefully from La Griffe Du Lion’s Smart Fraction II article

Just looking at the state of different societies around the world (including many indigenous groups that live within and have access to modern industrial or post-industrial technologies), there is clear variation in the average abilities of different groups to build and maintain complex societies. Japanese cities are technologically advanced, clean, and violence-free. Brazil, (which hasn’t even been nuked,) is full of incredibly violent, unsanitary, poorly-constructed favelas. Some of this variation is cultural, (Venezuela is doing particularly badly because communism doesn’t work,) or random chance, (Saudi Arabia has oil,) but some of it, by necessity, is genetic.

But if you find that a depressing thought, take heart: selective pressures can be changed. Start selecting for mathematical and verbal ability (and let everyone have a shot at developing those abilities) and you’ll get more mathematical and verbal abilities.

But this is getting long, so let’s continue our discussion next week.

Book on a Friday: Squid Empire: The Rise and Fall of the Cephalopods by Danna Staaf

Danna Staaf’s Squid Empire: The Rise and Fall of the Cephalopods is about the evolution of squids and their relatives–nautiluses, cuttlefish, octopuses, ammonoids, etc. If you are really into squids or would like to learn more about squids, this is the book for you. If you aren’t big on reading about squids but want something that looks nice on your coffee table and matches your Cthulhu, Flying Spaghetti Monster, and 20,000 Leagues Under the Sea decor, this is the book for you. If you aren’t really into squids, you probably won’t enjoy this book.

Squids, octopuses, etc. are members of the class of cephalopods, just as you are a member of the class of mammals. Mammals are in the phylum of chordates; cephalopods are mollusks. It’s a surprising lineage for one of Earth’s smartest creatures–80% of mollusk species are slugs and snails. If you think you’re surrounded by idiots, imagine how squids must feel.

The short story of cephalopodic evolution is that millions upon millions of years ago, most life was still stuck at the bottom of the ocean. There were some giant microbial mats, some slugs, some snails, some worms, and not a whole lot else. One of those snails figured out how to float by removing some of the salt from the water inside its shell, making itself a bit buoyant. Soon after, its foot (all mollusks have a “foot”) split into multiple parts. The now-floating snail drifted over the seafloor, using its new tentacles to catch and eat the less-mobile creatures below it.

From here, cephalopods diversified dramatically, creating the famous ammonoids of fossil-dating lore.

Cross-section of a fossilized ammonite shell, revealing internal chambers and septa

Ammonoids are known primarily from their shells (which fossilize well) rather than their fleshy tentacle parts, (which fossilize badly). But shells we have in such abundance that they can easily be used for dating other nearby fossils.

Ammonoids are obviously similar to their cousins, the lovely chambered nautiluses. (Please don’t buy nautilus shells; taking them out of their shells kills them and no one farms nautiluses so the shell trade is having a real impact on their numbers. We don’t need their shells, but they do.)

Ammonoids succeeded for millions of years, until the Cretaceous extinction event that also took out the dinosaurs. The nautiluses survived–perhaps, as the author speculates, because they lay large, yolk-rich eggs that develop very slowly, so infant nautiluses were able to wait out the event, while ammonoids, whose tiny, fast-growing eggs depended on feeding immediately after hatching, simply starved in the upheaval.

In the aftermath, modern squids and octopuses proliferated.

How did we get from floating, shelled snails to today’s squishy squids?

The first step was internalization–cephalopods began growing their fleshy mantles over their shells instead of inside of them–in essence, these invertebrates moved their skeletons inside, like vertebrates. Perhaps this was some horrible genetic accident, but it worked out. These internalized shells gradually became smaller and thinner, until they were reduced to a flexible rod called a “pen” that runs the length of a squid’s mantle. (Cuttlefish still retain a more substantial bone, which is frequently collected on beaches and sold for birds to peck at for its calcium.)

With the loss of the buoyant shell, squids had to find another way to float. This they apparently achieved by filling themselves with ammonia salts, which makes them less dense than water but also makes their decomposition disgusting and renders them unfossilizable because they turn to mush too quickly. Octopuses, by contrast, aren’t full of ammonia and so can fossilize.

Since the book is devoted primarily to cephalopod evolution rather than modern cephalopods, it doesn’t go into much depth on the subject of their intelligence. Out of all the invertebrates, cephalopods are easily the most intelligent (perhaps really the only intelligent invertebrates). Why? If cephalopods didn’t exist, we might easily conclude that invertebrates can’t be intelligent–invertebrateness is somehow inimical to intelligence. After all, most invertebrates are about as intelligent as slugs. But cephalopods do exist, and they’re pretty smart.

The obvious answer is that cephalopods can move and are predatory, which requires bigger brains. But why are they the only invertebrates–apparently–who’ve accomplished the task?

But enough jabber–let’s let Mrs. Staaf speak:

I find myself obliged to address the perennial question: “octopuses” or “octopi”? Or, heaven help us, “octopodes”?

Whichever you like best. Seriously. Despite what you may have heard, “octopus” is neither ancient Greek nor Latin. Aristotle called the animal polypous for its “many feet.” The ancient Romans borrowed this word and latinized the spelling to polypus. It was much later that a Renaissance scientist coined and popularized the word “octopus,” using Greek roots for “eight” and “foot” but Latin spelling.

If the word had actually been Greek, it would be spelled octopous and pluralized octopodes. If translated into Latin, it might have become octopes and pluralized octopedes, but more likely the ancient Romans would have simply borrowed the Greek word–as they did with polypous. Those who perhaps wished to appear erudite used the Greek plural polypodes, while others favored a Latin ending and pluralized it polypi.

The latter is a tactic we English speakers emulate when we welcome “octopus” into our own language and pluralize it “octopuses” as I’ve chosen to do.

There. That settles it.

Dinosaurs are the poster children for evolution and extinction writ large…

Of course, not all of them did die. We know now that birds are simply modern dinosaurs, but out of habit we tend to reserve the word “dinosaur” for the huge ancient creatures that went extinct at the end of the Cretaceous. After all, even if they had feathers, they seem so different from today’s finches and robins. For one thing, the first flying feathered dinosaurs all seem to have had four wings. There aren’t any modern birds with four wings.

Well… actually, domestic pigeons can be bred to grow feathers on their legs. Not fuzzy down, but long flight feathers, and along with these feathers their leg bones grow more winglike. The legs are still legs; they can’t be used to fly like wings. They do, however, suggest a clear step along the road from four-winged dinosaurs to two-winged birds. The difference between pigeons with ordinary legs and pigeons with wing-legs is created by control switches in their DNA that alter the expression of two particular genes. These genes are found in all birds, indeed in all vertebrates, and so were most likely present in dinosaurs as well.

…and I’ve just discovered that almost all of my other bookmarks fell out of the book. Um.

So squid brains are shaped like donuts because their eating/jet propulsion tube runs through the middle of their bodies and thus through the middle of their brains. It seems like this could be a problem if the squid eats too much or eats something with sharp bits in it, but squids seem to manage.

Squids can also leap out of the water and fly through the air for some ways. Octopuses can carry water around in their mantles, allowing them to move on dry land for a few minutes without suffocating.

Since cephalopods are unusual among mollusks in their ability to move quickly, they have a lot in common, genetically, with vertebrates. In essence, they are the most vertebrate-behaving of the mollusks. Convergent evolution.

The vampire squid, despite its name, is actually more of an octopus.

Let me quote from the chapter on sex and babies:

This is one arena in which cephalopods, both ancient and modern, are actually less alien than many animals–even other mollusks. Slugs, for instance, are hermaphroditic, and in the course of impregnating each other their penises sometimes get tangled, so they chew them off. Nothing in the rest of this chapter will make you nearly that uncomfortable. …

The lovely argonaut octopus

In one living coleoid species, however, sex is blindingly obvious. Females of the octopus known as an argonaut are five times larger than males. (A killer whale is about five times larger than an average adult human, which in turn is about five times larger than an opossum.)

This enormous size differential caught the attention of paleontologists who had noticed that many ammonoid species also came in two distinct sizes, which they had dubbed microconch (little shell) and macroconch (big shell). Both were clearly mature, as they had completed the juvenile part of the shell and constructed the final adult living chamber. After an initial flurry of debate, most researchers agreed to model ammonoid sex on modern argonauts, and began to call macroconchs females and microconchs males.

Some fossil nautiloids also come in macroconch and microconch flavors, though it’s more difficult to be certain that both are adults…

However, the shells of modern nautiluses show the opposite pattern–males are somewhat larger than females… Like the nautiloid shift from ten arms to many tens of arms, the pattern could certainly have evolved from a different ancestral condition. If we’re going to make that argument, though, we have to wonder when nautiloids switched from females to males as the larger sex, and why.

In modern species that have larger females, we usually assume the size difference has to do with making or brooding a lot of eggs. Female argonauts take it up a notch and actually secrete a shell-like brood chamber from their arms, using it to cradle numerous batches of eggs over their lifetime. Meanwhile, each tiny male argonaut gets to mate only once. His hectocotylus is disposable–after being loaded with sperm and inserted into the female, it breaks off. …

By contrast, when males are the bigger sex, we often guess that the purpose is competition. Certainly many species of squid and cuttlefish have large males that battle for female attention on the mating grounds. They display outrageous skin patterns as they push, shove, and bite each other. Females do appear impressed; at least, they mate with the winning males and consent to be guarded by them. Even in these species, though, there are some small males who exhibit a totally different mating strategy. While the big males strut their stuff, these small males quietly sidle up to the females, sometimes disguising themselves with female color patterns. This doesn’t put off the real females, who readily mate with these aptly named “sneaker males.” By their very nature, such obfuscating tactics are virtually impossible to glean from the fossil record…

More on octopus mating habits.

This, of course, reminded me of this graph:

In the majority of countries, women are more likely to be overweight than men (suggesting that our measure of “overweight” is probably flawed.) In some countries women are much more likely to be overweight, while in some countries men and women are almost equally likely to be overweight, and in just a few–the Czech Republic, Germany, Hungary, Japan, and, barely, France–men are more likely to be overweight.

Is there any rhyme or reason to this pattern? Surely affluence is related, but Japan, for all of its affluence, has very few overweight people at all, while Egypt, which is pretty poor, has far more overweight people. (A greater % of Egyptian women are overweight than American women, but American men are more likely to be overweight than Egyptian men.)

Of course, male humans are still–in every country–larger than females. Even an overweight female doesn’t necessarily weigh more than a regular male. But could the variation in male and female obesity rates have anything to do with historic mating strategies? Or is it completely irrelevant?

Back to the book:

Coleoid eyes are as complex as our own, with a lens for focusing light, a retina to detect it, and an iris to sharpen the image. … Despite their common complexity, though, there are some striking differences [between our and squid eyes]. For example, our retina has a blind spot where a bundle of nerves enters the eyeball before spreading out to connect to the front of every light receptor. By contrast, light receptors in the coleoid retina are innervated from behind, so there’s no “hole” or blind spot. Structural differences like this show that the two groups converged on similar solutions through distinct evolutionary pathways.

Another significant difference is that fish went on to evolve color vision by increasing the variety of light-sensitive proteins in their eyes; coleoids never did and are probably color blind. I say “probably” because the idea of color blindness in such colorful animals has flummoxed generations of scientists…

“I’m really more of a cuddlefish”

Color-blind or not, coleoids can definitely see something we humans are blind to: the polarization of light.

Sunlight normally consists of waves vibrating in all directions. But when these waves are reflected off certain surfaces, like water, they get organized and arrive at the retina vibrating in only one direction. We call this “glare” and we don’t like it, so we invented polarized sunglasses. … That’s pretty much all polarized sunglasses can do–block polarized light. Sadly, they can’t help you decode the secret messages of cuttlefish, which have the ability to perform a sort of double-talk with their skin, making color camouflage for the benefit of polarization-blind predators while flashing polarized displays to their fellow cuttlefish.

That’s amazing. Here’s an article with more on cuttlefish vision and polarization.

Overall, I enjoyed this book. The writing isn’t the most thrilling, but the author has a sense of humor and a deep love for her subject. I recommend it to anyone with a serious hankering to know more about the evolution of squids, or who’d like to learn more about an ancient animal besides dinosaurs.

When Did Black People Evolve?

In previous posts, we discussed the evolution of Whites and Asians, so today we’re taking a look at people from Sub-Saharan Africa.

Modern humans only left Africa about 100,000 to 70,000 years ago, and split into Asians and Caucasians around 40,000 years ago. Their modern appearances came later–white skin, light hair, and light eyes, for example, only evolved in the past 20,000, and possibly within the past 10,000, years.

What about the Africans, or specifically, Sub-Saharans? (North Africans, like Tunisians and Moroccans, are in the Caucasian clade.) When did their phenotypes evolve?

The Sahara, an enormous desert about the size of the United States, is one of the world’s biggest, most ancient barriers to human travel. The genetic split between Sub-Saharan Africans (SSAs) and non-SSAs, therefore, is one of the oldest and most substantial among human populations. But there are even older splits within Africa–some of the ancestors of today’s Pygmies and Bushmen may have split off from other Africans 200,000-300,000 years ago. We’re not sure, because the study of archaic African DNA is still in its infancy.

Some anthropologists refer to Bushmen as “gracile,” which means they are a little shorter than average Europeans and not stockily built

The Bushmen present an interesting case, because their skin is quite light (for Africans.) I prefer to call it golden. The nearby Damara of Namibia, by contrast, are one of the world’s darkest peoples. (The peoples of South Sudan, eg Malik Agar, may be darker, though.) The Pygmies are the world’s shortest peoples; the peoples of South Sudan, such as the Dinka and Shiluk, are among the world’s tallest.

Sub-Saharan Africa’s ethnic groups can be grouped, very broadly, into Bushmen, Pygmies, Bantus (aka Niger-Congo), Nilotics, and Afro-Asiatics. Bushmen and Pygmies are extremely small groups, while Bantus dominate the continent–about 85% of Sub Saharan Africans speak a language from the Niger-Congo family. The Afro-Asiatic groups, as their name implies, have had extensive contact with North Africa and the Middle East.

Most of America’s black population hails from West Africa–that is, the primarily Bantu region. The Bantus and similar-looking groups among the Nilotics and Afro-Asiatics (like the Hausa) therefore have both Africa’s most iconic and most common phenotypes.

For the sake of this post, we are not interested in the evolution of traits common to all humans, such as bipedalism. We are only interested in those traits generally shared by most Sub-Saharans and generally not shared by people outside of Africa.

detailed map of African and Middle Eastern ethnicities in Haaks et al’s dataset

One striking trait is black hair: it is distinctively “curly” or “frizzy.” Chimps and gorillas do not have curly hair. Neither do whites and Asians. (Whites and Asians, therefore, more closely resemble chimps in this regard.) Only Africans and a smattering of other equatorial peoples like Melanesians have frizzy hair.

Black skin is similarly distinct. Chimps, who live in the shaded forest and have fur, do not have high levels of melanin all over their bodies. While chimps naturally vary in skin tone, an unfortunate, hairless chimp is practically “white.”

Humans therefore probably evolved both black skin and frizzy hair at about the same time–when we came out of the shady forests and began running around on the much sunnier savannahs. Frizzy hair seems well-adapted to cooling–by standing on end, it lets air flow between the follicles–and of course melanin is protective from the sun’s rays. (And apparently, many of the lighter-skinned Bushmen suffer from skin cancer.)

Steatopygia also comes to mind, though I don’t know if anyone has studied its origins.

According to Wikipedia, additional traits common to Sub-Saharan Africans include:

In modern craniofacial anthropometry, Negroid describes features that typify skulls of black people. These include a broad and round nasal cavity; no dam or nasal sill; Quonset hut-shaped nasal bones; notable facial projection in the jaw and mouth area (prognathism); a rectangular-shaped palate; a square or rectangular eye orbit shape;[21] a large interorbital distance; a more undulating supraorbital ridge;[22] and large, megadontic teeth.[23] …

Modern cross-analysis of osteological variables and genome-wide SNPs has identified specific genes, which control this craniofacial development. Of these genes, DCHS2, RUNX2, GLI3, PAX1 and PAX3 were found to determine nasal morphology, whereas EDAR impacts chin protrusion.[27] …

Ashley Montagu lists “neotenous structural traits in which…Negroids [generally] differ from Caucasoids… flattish nose, flat root of the nose, narrower ears, narrower joints, frontal skull eminences, later closure of premaxillary sutures, less hairy, longer eyelashes, [and] cruciform pattern of second and third molars.”[28]

The Wikipedia page on Dark Skin states:

As hominids gradually lost their fur (between 4.5 and 2 million years ago) to allow for better cooling through sweating, their naked and lightly pigmented skin was exposed to sunlight. In the tropics, natural selection favoured dark-skinned human populations as high levels of skin pigmentation protected against the harmful effects of sunlight. Indigenous populations’ skin reflectance (the amount of sunlight the skin reflects) and the actual UV radiation in a particular geographic area is highly correlated, which supports this idea. Genetic evidence also supports this notion, demonstrating that around 1.2 million years ago there was a strong evolutionary pressure which acted on the development of dark skin pigmentation in early members of the genus Homo.[25]

About 7 million years ago human and chimpanzee lineages diverged, and between 4.5 and 2 million years ago early humans moved out of rainforests to the savannas of East Africa.[23][28] They not only had to cope with more intense sunlight but had to develop a better cooling system. …

Skin colour is a polygenic trait, which means that several different genes are involved in determining a specific phenotype. …

Data collected from studies on the MC1R gene have shown that there is a lack of diversity in dark-skinned African samples in the allele of the gene compared to non-African populations. This is remarkable given that the number of polymorphisms for almost all genes in the human gene pool is greater in African samples than in any other geographic region. So, while the MC1R gene does not significantly contribute to variation in skin colour around the world, the allele found in high levels in African populations probably protects against UV radiation and was probably important in the evolution of dark skin.[57][58]

Skin colour seems to vary mostly due to variations in a number of genes of large effect as well as several other genes of small effect (TYR, TYRP1, OCA2, SLC45A2, SLC24A5, MC1R, KITLG and SLC24A4). This does not take into account the effects of epistasis, which would probably increase the number of related genes.[59] Variations in the SLC24A5 gene account for 20–25% of the variation between dark and light skinned populations of Africa,[60] and appear to have arisen as recently as within the last 10,000 years.[61] The Ala111Thr or rs1426654 polymorphism in the coding region of the SLC24A5 gene reaches fixation in Europe, and is also common among populations in North Africa, the Horn of Africa, West Asia, Central Asia and South Asia.[62][63][64]

That’s rather interesting about MC1R. It could imply that the difference in skin tone between SSAs and non-SSAs is due to active selection in Blacks for dark skin and relaxed selection in non-Blacks, rather than active selection for light skin in non-Blacks.

The page on MC1R states:

MC1R is one of the key proteins involved in regulating mammalian skin and hair color. …It works by controlling the type of melanin being produced, and its activation causes the melanocyte to switch from generating the yellow or red phaeomelanin by default to the brown or black eumelanin in replacement. …

This is consistent with active selection being necessary to produce dark skin, and relaxed selection producing lighter tones.

Studies show the MC1R Arg163Gln allele has a high frequency in East Asia and may be part of the evolution of light skin in East Asian populations.[40] No evidence is known for positive selection of MC1R alleles in Europe[41] and there is no evidence of an association between MC1R and the evolution of light skin in European populations.[42] The lightening of skin color in Europeans and East Asians is an example of convergent evolution.

However, we should also note:

Dark-skinned people living in low sunlight environments have been recorded to be very susceptible to vitamin D deficiency due to reduced vitamin D synthesis. A dark-skinned person requires about six times as much UVB as lightly pigmented persons.

PCA graph and map of sampling locations. Modern people are indicated with gray circles.

Unfortunately, most of the work on human skin tones has been done among Europeans (and, oddly, zebra fish,) limiting our knowledge about the evolution of African skin tones, which is why this post has been sitting in my draft file for months. Luckily, though, two recent studies–Loci Associated with Skin Pigmentation Identified in African Populations and Reconstructing Prehistoric African Population Structure–have shed new light on African evolution.

In Reconstructing Prehistoric African Population Structure, Skoglund et al assembled genetic data from 16 prehistoric Africans and compared them to DNA from nearby present-day Africans. They found:

  1. The ancestors of the Bushmen (aka the San/KhoiSan) once occupied a much wider area.
  2. They contributed about 2/3s of the ancestry of ancient Malawi hunter-gatherers (around 8,100-2,500 YA)
  3. Contributed about 1/3 of the ancestry of ancient Tanzanian hunter-gatherers (around 1,400 YA)
  4. Farmers (Bantus) spread from west Africa, completely replacing hunter-gatherers in some areas
  5. Modern Malawians are almost entirely Bantu.
  6. A Tanzanian pastoralist population from 3,100 YA spread out across east Africa and into southern Africa
  7. Bushmen ancestry was not found in modern Hadza, even though they are hunter-gatherers and speak a click language like the Bushmen.
  8. The Hadza more likely derive most of their ancestry from ancient Ethiopians
  9. Modern Bantu-speakers in Kenya derive from a mix between western Africans and Nilotics around 800-400 years ago.
  10. Middle Eastern (Levant) ancestry is found across eastern Africa from an admixture event that occurred around 3,000 YA, or around the same time as the Bronze Age Collapse.
  11. A small amount of Iranian DNA arrived more recently in the Horn of Africa
  12. Ancient Bushmen were more closely related to modern eastern Africans like the Dinka (Nilotics) and Hadza than to modern west Africans (Bantus).
  13. This suggests either complex relationships between the groups or that some Bantus may have had ancestors from an unknown group of humans more ancient than the Bushmen.
  14. Modern Bushmen have been evolving darker skins
  15. Pygmies have been evolving shorter stature
Automated clustering of ancient and modern populations (moderns in gray)

I missed #12-13 on my previous post about this paper, though I did note that the more data we get on ancient African groups, the more likely I think we are to find ancient admixture events. If humans can mix with Neanderthals and Denisovans, then surely our ancestors could have mixed with Ergaster, Erectus, or whomever else was wandering around.

Distribution of ancient Bushmen and Ethiopian DNA in south and east Africa

#15 is interesting, and consistent with the claim that Bushmen suffer from a lot of skin cancer–before the Bantu expansion, they lived in far more forgiving climates than the Kalahari desert. But since Bushmen are already lighter than their neighbors, this raises the question of how light their ancestors–who had no Levantine admixture–were. Could the Bantus’ and Nilotics’ darker skins have evolved after the Bushmen/everyone else split?

Meanwhile, in Loci Associated with Skin Pigmentation Identified in African Populations, Crawford et al used genetic samples from 1,570 people from across Africa to find six genetic areas–SLC24A5, MFSD12, DDB1, TMEM138, OCA2 and HERC2–which account for almost 30% of the local variation in skin color.

Bantu (green) and Levantine/pastoralist DNA in modern peoples

SLC24A5 is a light pigment introduced to east Africa from the Levant, probably around 3,000 years ago. Today, it is common in Ethiopia and Tanzania.

Interestingly, according to the article, “At all other loci, variants associated with dark pigmentation in Africans are identical by descent in southern Asian and Australo-Melanesian populations.”

These are the world’s other darkest peoples, such as the Jarawas of the Andaman Islands or the Melanesians of Bougainville, PNG. (And, I assume, some groups from India such as the Tamils.) This implies that these groups 1. had dark skin already when they left Africa, and 2. never lost it on their way to their current homes. (If they had gotten lighter during their journey and then darkened again upon arrival, they likely would have different skin color variants than their African cousins.)

This implies that even if the Bushmen split off (around 200,000-300,000 YA) before dark skin evolved, it had evolved by the time people left Africa and headed toward Australia (around 100,000-70,000 YA.) This gives us a minimum threshold: it most likely evolved before 70,000 YA.

(But as always, we should be careful because perhaps there are even more skin color variants that we don’t know about yet in these populations.)

MFSD12 is common among Nilotics and is related to darker skin.

And according to the abstract, which Razib Khan posted:

Further, the alleles associated with skin pigmentation at all loci but SLC24A5 are ancient, predating the origin of modern humans. The ancestral alleles at the majority of predicted causal SNPs are associated with light skin, raising the possibility that the ancestors of modern humans could have had relatively light skin color, as is observed in the San population today.

The full article is not out yet, so I still don’t know when all of these light and dark alleles emerged, but the order is absolutely intriguing. For now, it looks like this mystery will still have to wait.

 

Two Exciting Papers on African Genetics

I loved that movie
Nǃxau ǂToma, (aka Gcao Tekene Coma,) Bushman star of “The Gods Must be Crazy,” roughly 1944-2003

An interesting article on Clues to Africa’s Mysterious Past appeared recently in the NY Times:

It was only two years ago that researchers found the first ancient human genome in Africa: a skeleton in a cave in Ethiopia yielded DNA that turned out to be 4,500 years old.

On Thursday, an international team of scientists reported that they had recovered far older genes from bone fragments in Malawi dating back 8,100 years. The researchers also retrieved DNA from 15 other ancient people in eastern and southern Africa, and compared the genes to those of living Africans.

Let’s skip to the article, Reconstructing Prehistoric African Population Structure by Skoglund et al:

We assembled genome-wide data from 16 prehistoric Africans. We show that the anciently divergent lineage that comprises the primary ancestry of the southern African San had a wider distribution in the past, contributing approximately two-thirds of the ancestry of Malawi hunter-gatherers ∼8,100–2,500 years ago and approximately one-third of the ancestry of Tanzanian hunter-gatherers ∼1,400 years ago.

Paths of the great Bantu Migration

The San are also known as the Bushmen, a famous group of recent hunter-gatherers from southern Africa.

We document how the spread of farmers from western Africa involved complete replacement of local hunter-gatherers in some regions…

This is most likely the Great Bantu Migration, which I wrote about in Into Africa: the Great Bantu Migration.

…and we track the spread of herders by showing that the population of a ∼3,100-year-old pastoralist from Tanzania contributed ancestry to people from northeastern to southern Africa, including a ∼1,200-year-old southern African pastoralist…

Whereas the two individuals buried in ∼2,000 BP hunter-gatherer contexts in South Africa share ancestry with southern African Khoe-San populations in the PCA, 11 of the 12 ancient individuals who lived in eastern and south-central Africa between ∼8,100 and ∼400 BP form a gradient of relatedness to the eastern African Hadza on the one hand and southern African Khoe-San on the other (Figure 1A).

The Hadza are a hunter-gatherer group from Tanzania who are not obviously related to any other people. Their language has traditionally been classed alongside the languages of the KhoiSan/Bushmen people because they all contain clicks, but the languages otherwise have very little in common and Hadza appears to be a language isolate, like Basque.

The genetic cline correlates to geography, running along a north-south axis with ancient individuals from Ethiopia (∼4,500 BP), Kenya (∼400 BP), Tanzania (both ∼1,400 BP), and Malawi (∼8,100–2,500 BP), showing increasing affinity to southern Africans (both ancient individuals and present-day Khoe-San). The seven individuals from Malawi show no clear heterogeneity, indicating a long-standing and distinctive population in ancient Malawi that persisted for at least ∼5,000 years (the minimum span of our radiocarbon dates) but which no longer exists today. …

We find that ancestry closely related to the ancient southern Africans was present much farther north and east in the past than is apparent today. This ancient southern African ancestry comprises up to 91% of the ancestry of Khoe-San groups today (Table S5), and also 31% ± 3% of the ancestry of Tanzania_Zanzibar_1400BP, 60% ± 6% of the ancestry of Malawi_Fingira_6100BP, and 65% ± 3% of the ancestry of Malawi_Fingira_2500BP (Figure 2A). …

Both unsupervised clustering (Figure 1B) and formal ancestry estimation (Figure 2B) suggest that individuals from the Hadza group in Tanzania can be modeled as deriving all their ancestry from a lineage related deeply to ancient eastern Africans such as the Ethiopia_4500BP individual …
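The ancestry proportions quoted above come from formal methods like those used in the paper (e.g. qpAdm on genome-wide data). As a hedged illustration of the basic intuition behind such estimates, not the paper’s actual method, here is a toy two-source admixture model: if an admixed population’s allele frequencies are a blend of two source populations, the mixing fraction can be recovered by least squares. All frequencies and population labels below are made up.

```python
# Toy allele frequencies at six loci for two hypothetical source populations.
# (Real analyses use formal methods such as qpAdm; this shows only the
# least-squares intuition behind estimating a mixing proportion.)
p_a = [0.10, 0.80, 0.30, 0.55, 0.20, 0.90]  # e.g. "ancient southern African" source
p_b = [0.70, 0.20, 0.60, 0.15, 0.85, 0.40]  # e.g. "ancient eastern African" source

true_alpha = 0.65  # fraction of ancestry from source A
p_mix = [true_alpha * a + (1 - true_alpha) * b for a, b in zip(p_a, p_b)]

# If p_mix = alpha*p_a + (1-alpha)*p_b, least squares gives the closed form:
# alpha = <p_mix - p_b, p_a - p_b> / <p_a - p_b, p_a - p_b>
num = sum((m - b) * (a - b) for m, a, b in zip(p_mix, p_a, p_b))
den = sum((a - b) ** 2 for a, b in zip(p_a, p_b))
alpha_hat = num / den
print(round(alpha_hat, 2))  # recovers 0.65
```

Real data is much messier (drift, noise, more than two sources), which is why the paper reports error bars like “31% ± 3%” rather than exact fractions.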

So what’s up with the Tanzanian expansion mentioned in the summary?

Western-Eurasian-related ancestry is pervasive in eastern Africa today … and the timing of this admixture has been estimated to be ∼3,000 BP on average… We found that the ∼3,100 BP individual… associated with a Savanna Pastoral Neolithic archeological tradition, could be modeled as having 38% ± 1% of her ancestry related to the nearly 10,000-year-old pre-pottery farmers of the Levant… These results could be explained by migration into Africa from descendants of pre-pottery Levantine farmers or alternatively by a scenario in which both pre-pottery Levantine farmers and Tanzania_Luxmanda_3100BP descend from a common ancestral population that lived thousands of years earlier in Africa or the Near East. We fit the remaining approximately two-thirds of Tanzania_Luxmanda_3100BP as most closely related to the Ethiopia_4500BP…

…present-day Cushitic speakers such as the Somali cannot be fit simply as having Tanzania_Luxmanda_3100BP ancestry. The best fitting model for the Somali includes Tanzania_Luxmanda_3100BP ancestry, Dinka-related ancestry, and 16% ± 3% Iranian-Neolithic-related ancestry (p = 0.015). This suggests that ancestry related to the Iranian Neolithic appeared in eastern Africa after earlier gene flow related to Levant Neolithic populations, a scenario that is made more plausible by the genetic evidence of admixture of Iranian-Neolithic-related ancestry throughout the Levant by the time of the Bronze Age …and in ancient Egypt by the Iron Age …

There is then a discussion of possible models of ancient African population splits (were the Bushmen the first? How long have they been isolated?) I suspect the more ancient African DNA we uncover, the more complicated the tree will become, just as in Europe and Asia we’ve discovered Neanderthal and Denisovan admixture.

They also compared genomes to look for genetic adaptations and found evidence for selection for taste receptors and “response to radiation” in the Bushmen, which the authors note “could be due to exposure to sunlight associated with the life of the ‡Khomani and Ju|’hoan North people in the Kalahari Basin, which has become a refuge for hunter-gatherer populations in the last millennia due to encroachment by pastoralist and agriculturalist groups.”

(The Bushmen are lighter than Bantus, with a more golden or tan skin tone.)

They also found evidence of selection for short stature among the Pygmies (which isn’t really surprising to anyone, unless you thought they had acquired their height by admixture with another very short group of people.)

Overall, this is a great paper and I encourage you to RTWT, especially the pictures/graphs.

Now, if that’s not enough African DNA for you, we also have Loci Associated with Skin Pigmentation Identified in African Populations, by Crawford et al:

Examining ethnically diverse African genomes, we identify variants in or near SLC24A5, MFSD12, DDB1, TMEM138, OCA2 and HERC2 that are significantly associated with skin pigmentation. Genetic evidence indicates that the light pigmentation variant at SLC24A5 was introduced into East Africa by gene flow from non-Africans. At all other loci, variants associated with dark pigmentation in Africans are identical by descent in southern Asian and Australo-Melanesian populations. Functional analyses indicate that MFSD12 encodes a lysosomal protein that affects melanogenesis in zebrafish and mice, and that mutations in melanocyte-specific regulatory regions near DDB1/TMEM138 correlate with expression of UV response genes under selection in Eurasians.

I’ve had an essay on the evolution of African skin tones sitting in my draft folder for ages because this research hadn’t been done. There’s plenty of research on European and Asian skin tones (skin appears to have significantly lightened around 10,000 years ago in Europeans,) but much less on Africans. Luckily for me, this paper fixes that.

Looks like SLC24A5 is related to that Levantine/Iranian back-migration into Africa documented in the first paper.

Parsis, Travellers, and Human Niches

Irish Travellers, 1954

I.

Why are there many kinds of plants and animals? Why doesn’t the best out-compete, eat, and destroy the others, rising to be the sole dominant species on Earth?

In ecology, a niche is an organism’s specific place within the environment. Some animals eat plants; some eat dung. Some live in the sea; others in trees. Different plants flower and grow in different seasons; some are pollinated by bees and some by flies. Every species has its specific niche.

The Competitive Exclusion Principle (aka Gause’s Law) states that ‘no two species can occupy the same niche’ (or positively, ‘two species coexisting must have different niches.’) For example, if squirrels and chipmunks both want to nest in the treetops and eat nuts, (and there are limited treetops and nuts,) then over time, whichever species is better at finding nuts and controlling the treetops will dominate the niche and the other, less successful species will have to find a new niche.

If squirrels are dominating the treetops and nuts, this leaves plenty of room for rabbits to eat grass and owls to eat squirrels.
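The squirrel-and-chipmunk story is the classic Lotka-Volterra competition model. Here is a minimal numerical sketch (all parameter values are made up for illustration, and simple Euler integration is used): with complete niche overlap, the species with the slightly higher carrying capacity, i.e. the one better at finding nuts, excludes the other.

```python
# Lotka-Volterra competition with full niche overlap (a12 = a21 = 1):
#   dn1/dt = r*n1*(1 - (n1 + a12*n2)/k1)
#   dn2/dt = r*n2*(1 - (n2 + a21*n1)/k2)
# Species 1 has the higher carrying capacity (k1 > k2), so by Gause's law
# it should win the shared niche. Parameters are illustrative, not empirical.
def simulate(n1=10.0, n2=10.0, r=0.5, k1=100.0, k2=90.0,
             a12=1.0, a21=1.0, dt=0.1, steps=5000):
    for _ in range(steps):
        dn1 = r * n1 * (1 - (n1 + a12 * n2) / k1)
        dn2 = r * n2 * (1 - (n2 + a21 * n1) / k2)
        n1 += dn1 * dt
        n2 += dn2 * dt
    return n1, n2

n1, n2 = simulate()
# Species 1 approaches its carrying capacity (~100); species 2 is
# competitively excluded and collapses toward zero.
```

With less than total overlap (a12, a21 < 1), the same equations instead allow stable coexistence, which is the mathematical version of “two species coexisting must have different niches.”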

II. So I was reading recently about the Parsis and the Travellers. The Parsis, as we discussed on Monday, are Zoroastrians, originally from Persia (modern-day Iran,) who settled in India about a thousand years ago. They’re often referred to as the “Jews of India” because they played a similar role in Indian society to that historically played by Jews in Europe.*

*Yes I know there are actual Jews in India.

The Travellers are an Irish group that’s functionally similar to Gypsies, but in fact genetically Irish:

In 2011 an analysis of DNA from 40 Travellers was undertaken at the Royal College of Surgeons in Dublin and the University of Edinburgh. The study provided evidence that Irish Travellers are a distinct Irish ethnic minority, who separated from the settled Irish community at least 1000 years ago; the claim was made that they are as distinct from the settled community as Icelanders are from Norwegians.[36]

It appears that Ireland did not have enough Gypsies of Indian extraction and so had to invent its own.

Though I originally meant that only in jest, why not? Gypsies occupy a particular niche, and if there are Gypsies around, I doubt anyone else is going to out-compete them for that niche. But if there aren’t any, then surely someone else could.

According to Wikipedia, the Travellers traditionally worked as tinkers, mending tinware (like pots) and acquiring dead or old horses for slaughter.

The Gypsies appear to have been originally itinerant musicians/entertainers, but have also worked as tinkers, smiths, peddlers, miners, and horse traders (today, car salesmen.)

These are not glorious jobs, but they are jobs, and peripatetic people have done them.

Jews (and Parsis, presumably) also filled a social niche, using their network of family/religious ties to other Jews throughout the diaspora as the basis of a high-trust business/trading network at a time when trade was difficult and routes were dangerous.

On the subject of “Magdeburg rights” or law in Eastern Europe, Wikipedia notes:

In medieval Poland, Jews were invited along with German merchants to settle in cities as part of the royal city development policy.

Jews and Germans were sometimes competitors in those cities. Jews lived under privileges that they carefully negotiated with the king or emperor. They were not subject to city jurisdiction. These privileges guaranteed that they could maintain communal autonomy, live according to their laws, and be subjected directly to the royal jurisdiction in matters concerning Jews and Christians. One of the provisions granted to Jews was that a Jew could not be made Gewährsmann, that is, he could not be compelled to tell from whom he acquired any object which had been sold or pledged to him and which was found in his possession. Other provisions frequently mentioned were a permission to sell meat to Christians, or employ Christian servants.

External merchants coming into the city were not allowed to trade on their own, but instead forced to sell the goods they had brought into the city to local traders, if any wished to buy them.

Note that this situation is immensely better if you already know the guy you’re selling to inside the city and he’s not inclined to cheat you because you both come from the same small, tight-knit group.

Further:

Under Bolesław III (1102–1139), the Jews, encouraged by the tolerant regime of this ruler, settled throughout Poland, including over the border in Lithuanian territory as far as Kiev.[32] Bolesław III recognized the utility of Jews in the development of the commercial interests of his country. … Mieszko III employed Jews in his mint as engravers and technical supervisors, and the coins minted during that period even bear Hebraic markings.[30] … Jews enjoyed undisturbed peace and prosperity in the many principalities into which the country was then divided; they formed the middle class in a country where the general population consisted of landlords (developing into szlachta, the unique Polish nobility) and peasants, and they were instrumental in promoting the commercial interests of the land.

If you need merchants and goldsmiths, someone will become merchants and goldsmiths. If it’s useful for those merchants and goldsmiths to all be part of one small, close-knit group, then a small, close-knit group is likely to step into that niche and out-compete anyone else trying to occupy it.

The similarity of the Parsis to the Jews probably has less to do with them both being monotheists (after all, Christians, Muslims, and Sikhs are also monotheists,) and more to do with them both being small but widely-flung diasporic communities united by a common religion that allows them to use their group as a source of trustworthy business partners.

Over hundreds or thousands of years, humans might not just move into social niches, but actually become adapted to them–Jews and Parsis are both reported to be very smart, for example.

III. I can think of several other cases of ethnic groups moving into a particular niche. In the US, the gambling and bootleg alcohol trade were long dominated by ethnic Sicilians, while the crack and heroin trades have been dominated by black and Hispanic gangs.

Note that, while these activities are (often) illegal, they are still things that people want to do, and the mafia/gangs are basically providing goods and services to their customers. As they see it, they’re just businessmen. They’re out to make money, not commit random violence.

That said, these guys do commit lots of violence, including murder, blackmail and extortion. Even violent crime can be its own niche, if it pays well enough.

(Ironically, the police crackdown on ethnic Sicilian organized crime in NYC coincided with a massive increase in crime–did the mafia, by controlling a particular territory, keep out competing bands of criminals?)

On a more pleasant note, society is now rich enough that many people can make a living as professional sports stars, marry other professional sports stars, and have children who go on to also be professional sports stars. It’s not quite at the level of “a caste of professional athletes genetically optimized for particular sports,” but if this kept up for a few hundred years, it could be.

Similarly, over in Nepal, “Sherpa” isn’t just a job, it’s an ethnic group. Sherpas, due to their high elevation adaptation, have an advantage over the rest of us when it comes to scaling Mt. Everest, and I hear the global mountain-climbing industry pays them well for their services. A Sherpa who can successfully scale Mt. Everest many times, make lots of money, and raise lots of children in an otherwise impoverished nation is thus a successful Sherpa–and contributing to the group’s further genetic and cultural specialization in the “climbing Mt. Everest” niche.

India, of course, is the ultimate case of ethnic groups specializing into specific jobs–it’d be interesting to see what adaptations different groups have acquired over the years.

I also wonder if the caste system is an effective way to minimize competition between groups in a multi-ethnic society, or if it leads to more resentment and instability in the long run.