Homo sapiens–that is, we modern humans–are about 200,000-300,000 years old. Our ancestor, Homo heidelbergensis, lived in Africa around 700,000-300,000 years ago.
Around 700,000 years ago, another group of humans split off from the main group. By 400,000 years ago, their descendants, Homo neanderthalensis–Neanderthals–had arrived in Europe, and another band of their descendants, the enigmatic Denisovans, arrived in Asia.
While we have found quite a few Neanderthal remains and archaeological sites with tools, hearths, and other artifacts, we’ve uncovered very few Denisovan remains–a couple of teeth, a finger bone, and part of an arm in Denisova Cave, Russia. (Perhaps a few other remains I am unaware of.)
Yet from these paltry remains scientists have extracted enough DNA to ascertain that not only were Denisovans a distinct species, but also that Melanesians, Papuans, and Aborigines derive about 3-6% of their DNA from Denisovan ancestors. (All non-African populations also have a small amount of Neanderthal DNA, derived from Neanderthal ancestors.)
If Neanderthals and Homo sapiens interbred, and Denisovans and Homo sapiens interbred, did Neanderthals and Denisovans ever mate?
At least once: the girl, affectionately nicknamed Denny, lived and died about 90,000 years ago in Siberia. The remains of her arm, found in Denisova Cave, reveal that her mother was a Neanderthal and her father a Denisovan.
We don’t yet know what Denisovans looked like, because we don’t have any complete skeletons of them, much less good skulls to examine, so we don’t know what a Neanderthal-Denisovan hybrid like Denny looked like.
But the fact that we can extract so much information from a single fragment of bone, preserved in a Siberian cave for 90,000 years, is amazing.
We are still far from truly understanding what sorts of people our evolutionary cousins were, but we are gaining new insights all the time.
“DNA builds products with a purpose. So do people.” –Auerswald, The Code Economy
McDonald’s is the world’s largest restaurant chain by revenue, serving over 69 million customers daily in over 100 countries across approximately 36,900 outlets as of 2016. … According to a BBC report published in 2012, McDonald’s is the world’s second-largest private employer (behind Walmart with 1.9 million employees), 1.5 million of whom work for franchises. …
There are currently a total of 5,669 company-owned locations and 31,230 franchised locations… Notably, McDonald’s has increased shareholder dividends for 25 consecutive years, making it one of the S&P 500 Dividend Aristocrats. …
According to Fast Food Nation by Eric Schlosser (2001), nearly one in eight workers in the U.S. have at some time been employed by McDonald’s. … Fast Food Nation also states that McDonald’s is the largest private operator of playgrounds in the U.S., as well as the single largest purchaser of beef, pork, potatoes, and apples. (Wikipedia)
How did a restaurant whose only decent products are french fries and milkshakes come to dominate the global corporate landscape?
In The Code Economy, Auerswald suggests that the secret to McDonald’s success isn’t (just) the french fries and milkshake machines:
Kroc opened his first McDonald’s restaurant in 1955 in Des Plaines, Illinois. Within five years he had opened two hundred new franchises across the country. [!!!] He pushed his operators obsessively to adhere to a system that reinforced the company motto: “Quality, service, cleanliness, and value.”
Quoting Kroc’s 1987 autobiography:
“It’s all interrelated–our development of the restaurant, the training, the marketing advice, the product development, the research that has gone into each element of the equipment package. Together with our national advertising and continuing supervisory assistance, it forms an invaluable support system. Individual operators pay 11.5 percent of their gross to the corporation for all of this…”
The process of operating a McDonald’s franchise was engineered to be as cognitively undemanding as possible. …
Kroc created a program that could be broken into subroutines…. Acting like the DNA of the organization, the manual allowed the Speedee Service System to function in a variety of environments without losing essential structure or function.
McDonald’s is big because it figured out how to reproduce.
I’m not sure why IKEA is so big (I don’t think it’s a franchise like McDonald’s), but based on the information posted on their walls, it’s because of their approach to furniture design. First, think of a problem, e.g., People Need Tables. Second, determine a price–IKEA makes some very cheap items and some pricier items, to suit different customers’ needs. Third, use Standard IKEA Wooden Pieces to design a nice-looking table. Fourth, draw the assembly instructions, so that anyone, anywhere, can assemble the furniture themselves–no translation needed.
IKEA furniture is kind of like Legos, in that much of it is made of very similar pieces of wood assembled in different ways. The wooden boards in my table aren’t that different in size and shape from the ones in my dresser nor the ones in my bookshelf, though the items themselves have pretty different dimensions. So on the production side, IKEA lowers costs by producing not actual furniture, but collections of boards. Boards are easy to make–sawmills produce tons of them.
Furniture is heavy, but mostly empty space. By contrast, piles of boards stack very neatly and compactly, saving space both in shipping and when buyers are loading the boxes into their cars. (I am certain that IKEA accounts for common car dimensions in designing and packing their furniture.)
And the assembly instructions allow the buyer to ultimately construct the furniture.
In other words, IKEA has hit upon a successful code that allows them to produce many different designs from a few basic boards and ship them efficiently–keeping costs low and allowing them to thrive.
The company is also looking for ways to maximize warehouse efficiency.
“We have (only) two pallet sizes,” Marston said, referring to the wooden platforms on which goods are placed. “Our warehouses are dimensioned and designed to hold these two pallet sizes. It’s all about efficiencies because that helps keep the price of innovation down.”
In Europe, some IKEA warehouses utilize robots to “pick the goods,” a term of art for grabbing products off very high shelves.
These factories, Marston said, are dark, since no lighting is needed for the robots, and run 24 hours a day, picking and moving goods around.
“You (can) stand on a catwalk,” she said, “and you look out at this huge warehouse with 12 pallets (stacked on top of each other) and this robot’s running back and forth running on electronic eyebeams.”
IKEA’s code and McDonald’s code are very different, but both let the companies produce the core items they sell quickly, cheaply, and efficiently.
The difficulty with evolution is that systems are complicated; successful mutations or even just combinations of existing genes must work synergistically with all of the other genes and systems already operating in the body. A mutation that increases IQ by tweaking neurons in a particular way might have the side effect of causing neurons outside the brain to malfunction horribly; a mutation that protects against sickle-cell anemia when you have one copy of it might just kill you itself if you have two copies.
Auerswald quotes Kauffman and Levin:
“Natural selection does not work as an engineer works… It works like a tinkerer–a tinkerer who does not know exactly what he is going to produce but uses… everything at his disposal to produce some kind of workable object.” This process is progressive, moving from simpler to more complex forms: “Evolution does not produce novelties from scratch. It works on what already exists, either transforming a system to give it new functions or combining several systems to produce a more elaborate one [as] during the passage from unicellular to multicellular forms.”
The Kauffman and Levin model was as simple as it was powerful. Imagine a genetic code of length N, where each gene might occupy one of two possible “states”–for example, “0” and “1” in a binary computer. The difficulty of the evolutionary problem was tunable with the parameter K, which represented the average number of interactions among genes. The NK model, as it came to be called, was able to reproduce a number of measurable features of evolution in biological systems. Evolution could be represented as a genetic walk on a fitness landscape, in which increasing complexity was now a central parameter.
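The NK model described above is simple enough to sketch in a few lines of code. Here is a minimal toy version (my own illustration, not Kauffman and Levin’s actual implementation): each gene’s fitness contribution is drawn from a random lookup table over its own state plus its K neighbors, and the genome’s fitness is the average of the contributions.

```python
import random

def make_nk(n, k, rng):
    """Build random per-gene lookup tables: one fitness value in [0, 1)
    for every possible state of each gene's (K+1)-bit neighborhood."""
    tables = []
    for _ in range(n):
        table = {}
        for s in range(2 ** (k + 1)):
            bits = tuple((s >> b) & 1 for b in range(k + 1))
            table[bits] = rng.random()
        tables.append(table)
    return tables

def nk_fitness(genome, k, tables, n):
    """Average the contribution of each gene, where each contribution
    depends on the gene itself plus its next K genes (wrapping around)."""
    total = 0.0
    for i in range(n):
        neighborhood = tuple(genome[(i + j) % n] for j in range(k + 1))
        total += tables[i][neighborhood]
    return total / n

rng = random.Random(0)
n, k = 10, 2
tables = make_nk(n, k, rng)
genome = [rng.randint(0, 1) for _ in range(n)]
print(round(nk_fitness(genome, k, tables, n), 3))
```

When K = 0 each gene can be optimized independently and the landscape has a single peak; as K grows, the interactions multiply the number of local peaks, which is exactly the “tunable difficulty” the text describes.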
Local optima–or optimums, if you prefer–are an illusion created by distance. A man standing on the hilltop at (approximately) X=2 may see land sloping downward all around himself and think that he is at the highest point on the graph. But hand him a telescope, and he discovers that the fellow standing on the hilltop at X=4 is even higher than he is. And hand the fellow at X=4 a telescope, and he’ll discover that X=6 is even higher.
A global optimum is the best possible way of doing something; a local optimum can look like a global optimum because all of the other, similar ways of doing the same thing are worse.
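The telescope analogy can be made concrete with a toy hill-climbing sketch (my own illustration; the heights are made up to match the hilltops at X=2 and X=4 above): a greedy climber who only compares adjacent positions stops at the nearest hilltop, whether or not it is the highest one.

```python
def hill_climb(landscape, x):
    """Greedy local search: move to a strictly higher neighbor until stuck."""
    while True:
        neighbors = [n for n in (x - 1, x + 1) if 0 <= n < len(landscape)]
        best = max(neighbors, key=lambda n: landscape[n])
        if landscape[best] <= landscape[x]:
            return x  # no neighbor is higher: a local optimum
        x = best

# Heights at positions x = 0..6: hilltops at x=2, x=4, and x=6.
heights = [0, 2, 4, 1, 6, 3, 8]

print(hill_climb(heights, 1))  # a climber starting near x=2 stops at x=2
print(hill_climb(heights, 5))  # a climber starting near x=6 reaches x=6
```

The climber who stops at x=2 is at a perfectly real local optimum; he only learns it isn’t the global one by looking farther than one step away.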
Some notable examples of cultures that were stuck at local optima but were able, with exposure, to jump suddenly to a higher optimum: The “opening of Japan” in the late 1800s resulted in breakneck industrialization and rising standards of living; the Cherokee invented their own alphabet (technically a syllabary) after glimpsing the Roman one, and achieved mass literacy within decades; European mathematics and engineering really took off after the introduction of Hindu-Arabic numerals and the base-ten system.
If we consider each culture its own “landscape” in which people (and corporations) are finding locally optimal solutions to problems, then it becomes immediately obvious that we need both a large number of distinct cultures working out their own solutions to problems and occasional communication and feedback between those cultures so results can transfer. If there is only one, global, culture, then we only get one set of solutions–and they will probably be sub-optimal. If we have many cultures but they don’t interact, we’ll get tons of solutions, and many of them will be sub-optimal. But many cultures developing their own solutions and periodically interacting can develop many solutions and discard sub-optimal ones for better ones.
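This many-cultures-with-occasional-contact idea corresponds to what evolutionary-computation people call an island model. A toy sketch (my own illustration; the parameters are made up, and a real island model would migrate a few individuals rather than whole solutions, to preserve diversity):

```python
import random

def island_search(fitness, n_islands=5, genome_len=12, rounds=30, rng=None):
    """Several 'cultures' hill-climb independently; every few rounds they
    compare notes and the laggards adopt the best solution found anywhere."""
    rng = rng or random.Random(1)
    islands = [[rng.randint(0, 1) for _ in range(genome_len)]
               for _ in range(n_islands)]
    for step in range(rounds):
        for g in islands:
            i = rng.randrange(genome_len)   # each culture tries one local tweak
            trial = g.copy()
            trial[i] ^= 1
            if fitness(trial) > fitness(g):
                g[:] = trial                # keep the improvement
        if step % 5 == 4:                   # occasional contact between cultures
            best = max(islands, key=fitness)
            for g in islands:
                if fitness(g) < fitness(best):
                    g[:] = best.copy()
    return max(islands, key=fitness)

# Toy fitness: just count the 1s, so the global optimum is all-ones.
best = island_search(sum)
print(sum(best))
```

With no contact, each culture is stuck with whatever its own random walk found; with total fusion, there is only one walk. Periodic contact gets the best of both: many independent searches, with good solutions spreading when they are discovered.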
Life constantly makes us take decisions under conditions of uncertainty. We can’t simply compute every possible outcome, and decide with perfect accuracy what the path forward is. We have to use heuristics. Religion is seen as a record of heuristics that have worked in the past. …
But while every generation faces new circumstances, there are also some common problems that every living being is faced with: survival and reproduction, and these are the most important problems because everything else depends on them. Mess with these, and everything else becomes irrelevant.
This makes religion an evolutionary record of solutions which persisted long enough, by helping those who held them to persist.
This is not saying “All religions are perfect and good and we should follow them,” but it is suggesting, “Traditional religions (and cultures) have figured out ways to solve common problems and we should listen to their ideas.”
Back in The Code Economy, Auerswald asks:
Might the same model, derived from evolutionary biology, explain the evolution of technology?
… technology may also be nothing else but the capacity for invariant reproduction. However, in order for more complex forms of technology to be viable over time, technology also must possess a capacity for learning and adaptation.
Evolutionary theory as applied to the advance of code is the focus of the next chapter. Kauffman and Levin’s NK model ends up providing a framework for studying the creation and evolution of code. Learning curves act as the link between biology and economics.
Will the machines become sentient? Or McDonald’s? And which should we worry about?
Welcome to EvX’s Book Club. Today we begin our exciting tour of Philip E. Auerswald’s The Code Economy: A Forty-Thousand-Year History, with the introduction, Technology = Recipes, and Chapter one, Jobs: Divide and Coordinate, if we get that far.
I’m not sure exactly how to run a book club, so just grab some coffee and let’s dive right in.
First, let’s note that Auerswald doesn’t mean code in the narrow sense of “commands fed into a computer” but in a much broader sense of all encoded processes humans have come up with. His go-to example is the cooking recipe.
The Code Economy describes the evolution of human productive activity from simplicity to complexity over the span of more than 40,000 years. I call this evolutionary process the advance of code.
I find the cooking example a bit cutesy, but otherwise it gets the job done.
How… have we humans managed to get where we are today despite our abundant failings, including wars, famine, and a demonstrably meager capacity for society-wide planning and coordination? … by developing productive activities that evolve into regular routines and standardized platforms–which is to say that we have survived, and thrived, by creating and advancing code.
There’s so much in this book that almost every sentence bears discussion. First, as I’ve noted before, social organization appears to be a spontaneous emergent feature of every human group. Without even really meaning to, humans just naturally seem compelled to organize themselves. One day you’re hanging out with your friends, riding motorcycles, living like an outlaw, and the next thing you know you’re using the formal legal system to sue a toy store for infringement of your intellectual property.
At the same time, our ability to organize society at the national level is completely lacking. As one of my professors once put it, “God must hate communists, because every time a country goes communist, an ‘act of God’ occurs and everyone dies.”
It’s a mystery why God hates communists so much, but hate ’em He does. Massive-scale social engineering is a total fail and we’ll still be suffering the results for a long time.
This creates a kind of conflict: people look at the small-scale organizing they do, then at the large-scale disorganization around them, and struggle to understand why the small stuff can’t simply be scaled up.
And yet… society still kind of works. I can go to the grocery store and be reasonably certain that by some magical process, fresh produce has made its way from fields in California to the shelf in front of me. By some magical process, I can wave a piece of plastic around and use it to exchange enough other, unseen goods to pay for my groceries. I can climb into a car I didn’t build and cruise down a network of streets and intersections, reasonably confident that everyone else driving their own two-ton behemoth at 60 miles an hour a few feet away from me has internalized the same rules necessary for not crashing into me. Most of the time. And I can go to the gas station and pour a miracle liquid into my car and the whole system works, whether or not I have any clue how all of the parts manage to come together and do so.
The result is a miracle. Modern society is a miracle. If you don’t believe me, try using an outhouse for a few months. Try carrying all of your drinking water by hand from the local stream and chopping down all of the wood you need to boil it to make it potable. Try fighting off parasites, smallpox, or malaria without medicine or vaccinations. For all my complaints (and I know I complain a lot,) I love civilization. I love not worrying about cholera, crop failure, or dying from cavities. I love air conditioning, refrigerators, and flush toilets. I love books and the internet and domesticated strawberries. All of these are things I didn’t create and can’t take credit for, but get to enjoy nonetheless. I have been blessed.
But at the same time, “civilization” isn’t equally distributed. Millions (billions?) of the world’s peoples don’t have toilets, electricity, refrigerators, or even a decent road from their village to the next.
Auerswald is a passionate champion of code. His answer to unemployment problems is probably “learn to code,” but in such a broad, metaphorical sense, encompassing so many human activities, that we can probably forgive him for it. One thing he doesn’t examine is why code takes off in some places but not others. Why is civilization more complex in Hong Kong than in Somalia? Why does France boast more Fields Medalists than the DRC?
In our next book (Niall Ferguson’s The Great Degeneration), we’ll discuss whether specific structures like legal and tax codes can affect how well societies grow and thrive (spoiler alert: they do, just see communism), and of course you are already familiar with the Jared Diamond environmentalist theory that folks in some parts of the world just had better natural resources to work with than in other parts (also true, at least in some cases; I’m not expecting some great industry to get up and running on its own in the Arctic).
But laying these concerns aside, there are obviously other broad factors at work. A map of GDP per capita looks an awful lot like a map of average IQs, with obvious caveats about the accidentally oil-rich Saudis and economically depressed ex-communists.
Auerswald believes that the past 40,000 years of code have not been disasters for the human race, but rather a cascade of successes, as each new invention and expansion to our repertoire of “recipes” or “codes” has enabled a whole host of new developments. For example, the development of copper tools didn’t just put flint knappers out of business; it also opened up whole new industries, because you can make more varieties of tools out of copper than flint. Now we had copper miners, copper smelters (a new profession), and copper workers. Copper tools could be sharpened and, unlike stone, resharpened, making them more durable. Artists made jewelry; spools of copper wire became trade goods, traveling long distances and stimulating the prehistoric “economy.” New code bequeaths complexity and even more code, not mass flint-knapper unemployment.
Likewise, the increase in reliable food supply created by farming didn’t create mass hunter-gatherer unemployment, but stimulated the growth of cities and differentiation of humans into even more professions, like weavers, cobblers, haberdashers, writers, wheelwrights, and mathematicians.
It’s a hopeful view, and I appreciate it in these anxious times.
But it’s very easy to say that the advent of copper or bronze or agriculture was a success because we are descended from the people who succeeded. We’re not descended from the hunter-gatherers who got displaced or wiped out by agriculturalists. In recent cases where hunter-gatherer or herding societies were brought into the agriculturalist fold, the process has been rather painful.
Elizabeth Marshall Thomas’s The Harmless People, about the Bushmen of the Kalahari, might overplay the romance and downplay the violence, but the epilogue’s description of how the arrival of “civilization” resulted in the deaths and degradation of the Bushmen brought tears to my eyes. First they died of dehydration because new fences erected to protect “private property” cut them off from the only water. No longer free to pursue the lives they had lived for centuries, they were moved onto what are essentially reservations and taught to farm and herd. Alcoholism and violence became rampant.
Among the book’s many characters was a man who had lost most of his leg to snakebite. He suffered terribly as his leg rotted away, cared for by his wife and family who brought him food. Eventually, with help, he healed and obtained a pair of crutches, learned to walk again, and resumed hunting: providing for his family.
And then in “civilization” he was murdered by one of his fellow Bushmen.
It’s a sad story and there are no easy answers. Bushman life is hard. Most people, when given the choice, seem to pick civilization. But usually we aren’t given a choice. The Bushmen weren’t. Neither were factory workers who saw their jobs automated and outsourced. Some Bushmen will adapt and thrive. Nelson Mandela was part Bushman, and he did quite well for himself. But many will suffer.
What to do about the suffering of those left behind–those who cannot cope with change, who do not have the mental or physical capacity to “learn to code” or otherwise adapt–remains an unanswered question. Humanity might move on without them, ignoring their suffering because we find them undeserving of compassion–or we might get bogged down trying to save them all. Perhaps we can find a third route: sympathy for the unfortunate without encouraging obsolete behavior?
In The Great Degeneration, Ferguson wonders why the systems (“code”) that support our society appear to be degenerating. I have a crude answer: people are getting stupider. It takes a certain amount of intelligence to run a piece of code. Even a simple task like transcribing numbers is better performed by a smarter person than a dumber one, who is more likely to accidentally write down the wrong number. Human systems are built and executed by humans, and if the humans running them are less intelligent than the ones who made them, they will do a bad job of running the systems.
Unfortunately for those of us over in civilization, dysgenics is a real thing:
Whether you blame IQ itself or the number of years smart people spend in school, dumb people have more kids (especially the parents of the Baby Boomers). Epigone here only looks at white data (I believe Jayman has the black data, and it’s just as bad, if not worse.)
Of course we can debate about the Flynn effect and all that, but I suspect there are two competing things going on: First, a rising ’50s economic tide lifted all boats, making everyone healthier and thus smarter and better at taking IQ tests and making babies; and second, declining infant mortality since the late 1800s (and possibly the welfare state) made it easier for the children of the poorest and least capable parents to survive.
The effects of these two trends probably cancel out at first, but after a while you run out of Flynn effect (maybe) and then the other starts to show up. Eventually you get Greece: once the shining light of Civilization, now defaulting on its loans.
Well, we have made it a page in!
What do you think of the book? Have you finished it yet? What do you think of the way Auerswald conceptualizes “code” and its basis as the building block of pretty much all human activity? Do you think Auerswald is essentially correct to be hopeful about our increasingly code-driven future, or should we beware of the tradeoffs to individual autonomy and freedom inherent in becoming a glorified colony of ants?
Pediculus humanus humanus (the body louse) is indistinguishable in appearance from Pediculus humanus capitis (the head louse) but will interbreed only under laboratory conditions. In their natural state, they occupy different habitats and do not usually meet. In particular, body lice have evolved to attach their eggs to clothes, whereas head lice attach their eggs to the base of hairs.
So when did the clothes-infesting body louse decide to stop associating with its hair-clinging cousins?
The body louse diverged from the head louse at around 100,000 years ago, hinting at the time of the origin of clothing.
So, did Neanderthals have clothes? Or did they survive winters in ice age Europe by being really hairy?
Behavioral modernity–such as intentional burials and cave painting–is thought to have emerged around 50,000 years ago. Some people push this date back to 80,000 years ago, possibly just before the Out of Africa event (something that made people smarter and better at making tools may have been necessary for OOA to succeed).
But perhaps we should consider the invention of clothing alongside other technological breakthroughs that made us modern–after all, I don’t think we hairless apes could have had much success at conquering the planet without clothes.
(On the other hand, other Wikipedia pages give different estimates for the origin of clothing, some also citing louse studies, so I’m not sure of the 100k YA date, but surely clothes were invented before we went anywhere cold.)
Oddly, though, there appears to have been at least one human group that managed to survive in a cold climate without much in the way of clothes, the Yaghan people of Tierra del Fuego. In fact, the whole reason the region got named Tierra del Fuego (translation: Land of the fire) is because the nearly-naked locals carried fire with them wherever they went to stay warm.
Only 100-1,600 Yaghans remain; their language is an isolate with only one native speaker, and she’s 89 years old.
Unfortunately, searching for “people with no clothes” does not return any useful information about other groups that might have led similar lifestyles.
Native Americans appear to also carry a strain of head lice that had previously occupied Homo erectus’s hair, suggesting that H.e. and the ancestors of today’s N.A.s once met. Since these lice aren’t found elsewhere, it’s evidence that H. e. might have survived somewhere out there until fairly recently.
The smartest non-human primates, like Kanzi the bonobo and Koko the gorilla, understand about 2,000 to 4,000 words. Koko can make about 1,000 signs in sign language, and Kanzi can use about 450 lexigrams (pictures that stand for words). Koko can also make some onomatopoetic words–that is, she can make and use imitative sounds in conversation.
A four-year-old human knows about 4,000 words, similar to an exceptional gorilla. An adult knows about 20,000-35,000 words. (Another study puts the upper bound at 42,000.)
Somewhere along our journey from ape-like hominins to homo sapiens sapiens, our ancestors began talking, but exactly when remains a mystery. The origins of writing have been amusingly easy to discover, because early writers were fond of very durable surfaces, like clay, stone, and bone. Speech, by contrast, evaporates as soon as it is heard–leaving no trace for archaeologists to uncover.
But we can find the things necessary for speech and the things for which speech, in turn, is necessary.
The main reason why chimps and gorillas, even those taught human language, must rely on lexigrams or gestures to communicate is that their voiceboxes, lungs, and throats work differently than ours. Their semi-arboreal lifestyle requires using the ribs as a rigid base for the arm and shoulder muscles while climbing, which in turn requires closing the lungs while climbing to provide support for the ribs.
Full bipedalism released our early ancestors from the constraints on airway design imposed by climbing, freeing us to make a wider variety of vocalizations.
Now is the perfect time to break out my file of relevant human evolution illustrations:
We humans split from our nearest living ape relatives about 7-8 million years ago, but true bipedalism may not have evolved for a few more million years. Since there are many different named hominins, here is a quick guide:
Australopithecines (light blue in the graph), such as the famous Lucy, are believed to have been the first fully bipedal hominins, although, based on the shape of their toes, they may have still occasionally retreated into the trees. They lived between 4 and 2 million years ago.
Without delving into the myriad classification debates along the lines of “should we count this set of skulls as a separate species or are they all part of the natural variation within one species,” by the time the Homo genus arose with H. habilis or H. rudolfensis around 2.8 million years ago, humans were much worse at climbing trees.
Interestingly, one direction humans have continued evolving in is up.
The reliable production of stone tools represents an enormous leap forward in human cognition. The first known stone tools–Oldowan–are about 2.5-2.6 million years old and were probably made by Homo habilis. These simple tools are typically shaped on only one side.
By the Acheulean–1.75 million to 100,000 years ago–toolmaking had become much more sophisticated. Not only did knappers shape both sides of the tops and bottoms of stones, but they also made tools by first shaping a core stone and then flaking derivative pieces from it.
The first Acheulean tools were fashioned by H. erectus; by 100,000 years ago, H. sapiens had presumably taken over the technology.
Flint knapping is surprisingly difficult, as many an archaeology student has discovered.
These technological advances were accompanied by steadily increasing brain sizes.
I propose that the complexities of the Acheulean tool complex required some form of language to facilitate learning and teaching; this gives us a potential lower bound on language around 1.75 million years ago. Bipedalism gives us an upper bound around 4 million years ago, before which our voice boxes were likely more restricted in the sounds they could make.
A Different View
Even though Homo sapiens has been around for about 300,000 years (or rather, that is where we have chosen to draw the line between our species and the previous one), “behavioral modernity” only emerged around 50,000 years ago (very awkward timing if you know anything about human dispersal).
Everything about behavioral modernity is heavily contested (including when it began), but no matter how and when you date it, compared to the million years or so it took humans to figure out how to knap the back side of a rock, human technological advance has accelerated significantly over the past 100,000 years, and even more so over the past 50,000 and even 10,000.
Fire was another of humanity’s early technologies:
Claims for the earliest definitive evidence of control of fire by a member of Homo range from 1.7 to 0.2 million years ago (Mya). Evidence for the controlled use of fire by Homo erectus, beginning some 600,000 years ago, has wide scholarly support. Flint blades burned in fires roughly 300,000 years ago were found near fossils of early but not entirely modern Homo sapiens in Morocco. Evidence of widespread control of fire by anatomically modern humans dates to approximately 125,000 years ago.
What prompted this sudden acceleration? Noam Chomsky suggests that it was triggered by the evolution of our ability to use and understand language:
Noam Chomsky, a prominent proponent of discontinuity theory, argues that a single chance mutation occurred in one individual in the order of 100,000 years ago, installing the language faculty (a component of the mind–brain) in “perfect” or “near-perfect” form.
More specifically, we might say that this single chance mutation created the capacity for figurative or symbolic language, as clearly apes already have the capacity for very simple language. It was this ability to convey abstract ideas, then, that allowed humans to begin expressing themselves in other abstract ways, like cave painting.
I disagree with this view on the grounds that human groups were already pretty widely dispersed by 100,000 years ago. For example, Pygmies and Bushmen are descended from groups of humans who had already split off from the rest of us by then, but they still have symbolic language, art, and everything else contained in the behavioral modernity toolkit. Of course, if a trait is particularly useful or otherwise successful, it can spread extremely quickly (think lactose tolerance,) and neither Bushmen nor Pygmies were 100% genetically isolated for the past 250,000 years, but I simply think the math here doesn’t work out.
However, that doesn’t mean Chomsky isn’t on to something. For example, Johanna Nichols (another linguist,) used statistical models of language differentiation to argue that modern languages split around 100,000 years ago. This coincides neatly with the upper bound on the Out of Africa theory, suggesting that Nichols may actually have found the point when language began differentiating because humans left Africa, or perhaps she found the origin of the linguistic skills necessary to accomplish humanity’s cross-continental trek.
In normal adults these two portions of the SVT form a right angle to one another and are approximately equal in length—in a 1:1 proportion. Movements of the tongue within this space, at its midpoint, are capable of producing tenfold changes in the diameter of the SVT. These tongue maneuvers produce the abrupt diameter changes needed to produce the formant frequencies of the vowels found most frequently among the world’s languages—the “quantal” vowels [i], [u], and [a] of the words “see,” “do,” and “ma.” In contrast, the vocal tracts of other living primates are physiologically incapable of producing such vowels.
(Since juvenile humans are shaped differently than adults, they pronounce sounds slightly differently until their voiceboxes fully develop.)
…Neanderthal necks were too short and their faces too long to have accommodated equally proportioned SVTs. Although we could not reconstruct the shape of the SVT in the Homo erectus fossil because it does not preserve any cervical vertebrae, it is clear that its face (and underlying horizontal SVT) would have been too long for a 1:1 SVT to fit into its head and neck. Likewise, in order to fit a 1:1 SVT into the reconstructed Neanderthal anatomy, the larynx would have had to be positioned in the Neanderthal’s thorax, behind the sternum and clavicles, much too low for effective swallowing. …
Surprisingly, our reconstruction of the 100,000-year-old specimen from Israel, which is anatomically modern in most respects, also would not have been able to accommodate a SVT with a 1:1 ratio, albeit for a different reason. … Again, like its Neanderthal relatives, this early modern human probably had an SVT with a horizontal dimension longer than its vertical one, translating into an inability to reproduce the full range of today’s human speech.
It was only in our reconstruction of the most recent fossil specimens—the modern humans postdating 50,000 years— that we identified an anatomy that could have accommodated a fully modern, equally proportioned vocal tract.
Just as small children who can’t yet pronounce the letter “r” can nevertheless make and understand language, I don’t think early humans needed to have all of the same sounds as we have in order to communicate with each other. They would have just used fewer sounds.
The change in our voiceboxes may not have triggered the evolution of language, but rather been triggered by language itself. As humans began transmitting more knowledge via language, those who could make more sounds and utter a greater range of words perhaps had an edge over their peers–maybe they were seen as particularly clever, or perhaps they had an easier time organizing bands of hunters and warriors.
One of the interesting things about human language is that it is clearly simultaneously cultural–which language you speak is entirely determined by culture–and genetic–only humans can produce language in the way we do. Even the smartest chimps and dolphins cannot match our vocabularies, nor imitate our sounds. Human infants–unless they have some form of brain damage–learn language instinctually, without conscious teaching. (See Steven Pinker’s The Language Instinct.)
Some kind of genetic changes were obviously necessary to get from apes to human language use, but exactly what remains unclear.
A variety of genes are associated with language use, e.g., FOXP2. H. sapiens and chimps have different versions of the FOXP2 gene (and Neanderthals had a third, more similar to the H. sapiens version than to the chimp’s,) but to my knowledge we have yet to discover exactly when the necessary mutations arose.
Despite their impressive skulls and survival in a harsh, novel climate, Neanderthals seem not to have engaged in much symbolic activity (though to be fair, they were wiped out right about the time Sapiens really got going with its symbolic activity.) Homo sapiens and Homo neanderthalensis split around 800,000-400,000 years ago–perhaps the difference in our language genes ultimately gave Sapiens the upper hand.
Just as farming appears to have emerged relatively independently in several different locations around the world at about the same time, so behavioral modernity seems to have taken off in several different groups around the same time. Of course we can’t rule out the possibility that these groups had some form of contact with each other–peaceful or otherwise–but it seems more likely to me that similar behaviors emerged in disparate groups around the same time because the cognitive precursors necessary for those behaviors had already begun before they split.
Based on genetics, the shape of their larynges, and their cultural toolkits, Neanderthals probably did not have modern speech, but they may have had something similar to it. This suggests that at the time of the Sapiens-Neanderthal split, our common ancestor possessed some primitive speech capacity.
By the time Sapiens and Neanderthals encountered each other again, nearly half a million years later, Sapiens’ language ability had advanced, possibly due to further modification of FOXP2 and other genes like it, plus our newly modified voiceboxes, while Neanderthals’ had lagged. Sapiens achieved behavioral modernity and took over the planet, while Neanderthals disappeared.
North Africa is an often misunderstood region in human genetics. Since it is in Africa, people often assume that it contains the same variety of people referenced in terms like “African Americans,” “black Africans,” or even just “Africans.” In reality, the African continent contains members of all three of the great human clades–Sub-Saharan Africans in the south, Austronesians (Asian clade) in Madagascar, and Caucasians in the north.
Throughout most of human history, the Sahara–not the Mediterranean or Red seas–has been the biggest local impediment to human migration–thus North Africans are much closer, genetically, to their neighbors in Europe and the Middle East than to their neighbors across the desert. (And before the domestication of the camel, about 3,000 years ago, the Sahara was even harder to cross.)
But from time to time, global weather patterns change and the Sahara becomes a garden: the Green Sahara. The last time we had a Green Sahara was about 9,000-7,000 years ago; during this time, people lived, hunted, fished, herded and perhaps farmed throughout areas that are today nearly uninhabited wastes.
In order to investigate the role of the last Green Sahara in the peopling of Africa, we deep-sequence the whole non-repetitive portion of the Y chromosome in 104 males selected as representative of haplogroups which are currently found to the north and to the south of the Sahara. … We find that the coalescence age of the trans-Saharan haplogroups dates back to the last Green Sahara, while most northern African or sub-Saharan clades expanded locally in the subsequent arid phase. …
Our findings suggest that the Green Sahara promoted human movements and demographic expansions, possibly linked to the adoption of pastoralism. Comparing our results with previously reported genome-wide data, we also find evidence for a sex-biased sub-Saharan contribution to northern Africans, suggesting that historical events such as the trans-Saharan slave trade mainly contributed to the mtDNA and autosomal gene pool, whereas the northern African paternal gene pool was mainly shaped by more ancient events.
In other words, modern North Africans have some maternal (female) Sub-Saharan DNA that arrived recently via the Islamic slave trade, but most of their Sub-Saharan Y-DNA (male) is much older, hailing from the last time the Sahara was easy to cross.
Note that not much DNA is shared across the Sahara:
After the African humid period, the climatic conditions became rapidly hyper-arid and the Green Sahara was replaced by the desert, which acted as a strong geographic barrier against human movements between northern and sub-Saharan Africa.
A consequence of this is that there is a strong differentiation in the Y chromosome haplogroup composition between the northern and sub-Saharan regions of the African continent. In the northern area, the predominant Y lineages are J-M267 and E-M81, with the former being linked to the Neolithic expansion in the Near East and the latter reaching frequencies as high as 80 % in some north-western populations as a consequence of a very recent local demographic expansion [8–10]. On the contrary, sub-Saharan Africa is characterised by a completely different genetic landscape, with lineages within E-M2 and haplogroup B comprising most of the Y chromosomes. In most regions of sub-Saharan Africa, the observed haplogroup distribution has been linked to the recent (~ 3 kya) demic diffusion of Bantu agriculturalists, which brought E-M2 sub-clades from central Africa to the East and to the South [11–17]. On the contrary, the sub-Saharan distribution of B-M150 seems to have more ancient origins, since its internal lineages are present in both Bantu farmers and non-Bantu hunter-gatherers and coalesce long before the Bantu expansion [18–20].
In spite of their genetic differentiation, however, northern and sub-Saharan Africa share at least four patrilineages at different frequencies, namely A3-M13, E-M2, E-M78 and R-V88.
Here, by using whole Y chromosome sequences, we intend to shed some light on the historical and demographic processes that modelled the genetic landscape of North Africa. Previous studies suggested that the strategic location of North Africa, separated from Europe by the Mediterranean Sea, from the rest of the African continent by the Sahara Desert and limited to the East by the Arabian Peninsula, has shaped the genetic complexity of current North Africans15,16,17. Early modern humans arrived in North Africa 190–140 kya (thousand years ago)18, and several cultures settled in the area before the Holocene. In fact, a previous study by Henn et al.19 identified a gradient of likely autochthonous North African ancestry, probably derived from an ancient “back-to-Africa” gene flow prior to the Holocene (12 kya). In historic times, North Africa has been populated successively by different groups, including Phoenicians, Romans, Vandals and Byzantines. The most important human settlement in North Africa was conducted by the Arabs by the end of the 7th century. Recent studies have demonstrated the complexity of human migrations in the area, resulting from an amalgam of ancestral components in North African groups15,20.
According to the article, E-M81 is dominant in Northwest Africa and absent almost everywhere else in the world.
The authors tested various men across north Africa in order to draw up a phylogenetic tree of the branching of E-M183:
The distribution of each subhaplogroup within E-M183 can be observed in Table 1 and Fig. 2. Indeed, different populations present different subhaplogroup compositions. For example, whereas in Morocco almost all subhaplogroups are present, Western Sahara shows a very homogeneous pattern with only E-SM001 and E-Z5009 being represented. A similar picture to that of Western Sahara is shown by the Reguibates from Algeria, which contrast sharply with the Algerians from Oran, which showed a high diversity of haplogroups. It is also worth to notice that a slightly different pattern could be appreciated in coastal populations when compared with more inland territories (Western Sahara, Algerian Reguibates).
Overall, the authors found that the haplotypes were “strikingly similar” to each other and showed little geographic structure besides the coastal/inland differences:
As proposed by Larmuseau et al.25, the scenario that better explains Y-STR haplotype similarity within a particular haplogroup is a recent and rapid radiation of subhaplogroups. Although the dating of this lineage has been controversial, with dates proposed ranging from Paleolithic to Neolithic and to more recent times17,22,28, our results suggested that the origin of E-M183 is much more recent than was previously thought. … In addition to the recent radiation suggested by the high haplotype resemblance, the pattern showed by E-M183 imply that subhaplogroups originated within a relatively short time period, in a burst similar to those happening in many Y-chromosome haplogroups23.
In other words, someone went a-conquering.
Alternatively, given the high frequency of E-M183 in the Maghreb, a local origin of E-M183 in NW Africa could be envisaged, which would fit the clear pattern of longitudinal isolation by distance reported in genome-wide studies15,20. Moreover, the presence of autochthonous North African E-M81 lineages in the indigenous population of the Canary Islands, strongly points to North Africa as the most probable origin of the Guanche ancestors29. This, together with the fact that the oldest indigenous individuals have been dated 2210 ± 60 ya, supports a local origin of E-M183 in NW Africa. Within this scenario, it is also worth to mention that the paternal lineage of an early Neolithic Moroccan individual appeared to be distantly related to the typically North African E-M81 haplogroup30, suggesting again a NW African origin of E-M183. A local origin of E-M183 in NW Africa > 2200 ya is supported by our TMRCA estimates, which can be taken as 2,000–3,000, depending on the data, methods, and mutation rates used.
However, the authors also note that they can’t rule out a Middle Eastern origin for the haplogroup since their study simply doesn’t include genomes from Middle Eastern individuals. They rule out a spread during the Neolithic expansion (too early) but not the Islamic expansion (“an extensive, male-biased Near Eastern admixture event is registered ~1300 ya, coincidental with the Arab expansion20.”) Alternatively, they suggest E-M183 might have expanded near the end of the third Punic War. Sure, Carthage (in Tunisia) was defeated by the Romans, but the era was otherwise one of great North African wealth and prosperity.
Interesting papers! My hat’s off to the authors. I hope you enjoyed them and get a chance to RTWT.
I was really excited about this book when I picked it up at the library. It has the word “numbers” on the cover and a subtitle that implies a story about human cultural and cognitive evolution.
Regrettably, what could have been a great book has turned out to be kind of annoying. There’s some fascinating information in here–for example, there’s a really interesting part on pages 249-252–but you have to get through pages 1-248 to get there. (Unfortunately, sometimes authors put their most interesting bits at the end so that people looking to make trouble have gotten bored and wandered off by then.)
I shall try to discuss/quote some of the book’s more interesting bits, and leave aside my differences with the author (who keeps reiterating his position that mathematical ability is entirely dependent on the culture you’re raised in.) Everett nonetheless has a fascinating perspective, having actually spent much of his childhood in a remote Amazonian village belonging to the Piraha, who have no real words for numbers. (His parents were missionaries.)
Which languages contain number words? Which don’t? Everett gives a broad survey:
“…we can reach a few broad conclusions about numbers in speech. First, they are common to nearly all of the world’s languages. … this discussion has shown that number words, across unrelated languages, tend to exhibit striking parallels, since most languages employ a biologically based body-part model evident in their number bases.”
That is, many languages have words that translate essentially to “One, Two, Three, Four, Hand, … Two hands, (10)… Two Feet, (20),” etc., and reflect this in their higher counting systems, which can end up containing a mix of base five, 10, and 20. (The Romans, for example, used both base five and ten in their written system.)
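The Roman mix of a base-five sub-base inside a base-ten system is easy to see in the standard subtractive numeral scheme. Here's a minimal sketch (my own illustration, not from the book):

```python
# Roman numerals alternate base-10 and base-5 symbols:
# I (1), V (5), X (10), L (50), C (100), D (500), M (1000).
SYMBOLS = [
    (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
    (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
    (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
]

def to_roman(n: int) -> str:
    """Convert a positive integer to a Roman numeral."""
    out = []
    for value, symbol in SYMBOLS:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

print(to_roman(1987))  # MCMLXXXVII
```

Every second symbol (V, L, D) is five of the symbol below it–the fingers-of-one-hand step inside a two-hands system.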
“Third, the linguistic evidence suggests not only that this body-part model has motivated the innovation of numbers throughout the world, but also that this body-part basis of number words stretches back historically as far as the linguistic data can take us. It is evident in reconstructions of ancestral languages, including Proto-Sino-Tibetan, Proto-Niger-Congo, Proto-Austronesian, and Proto-Indo-European, the languages whose descendant tongues are best represented in the world today.”
Note, though, that linguistics does not actually give us a very long time horizon. Proto-Indo-European was spoken about 4-6,000 years ago. Proto-Sino-Tibetan is not as well studied yet as PIE, but also appears to be at most 6,000 years old. Proto-Niger-Congo is probably about 5-6,000 years old. Proto-Austronesian (which, despite its name, is not associated with Australia,) is about 5,000 years old.
These ranges are not a coincidence: languages change as they age, and once they have changed too much, they become impossible to classify into language families. Older languages, like Basque or Ainu, are often simply described as isolates, because we can’t link them to their relatives. Since humanity itself is 200,000-300,000 years old, comparative linguistics only opens a very short window into the past. Various groups–like the Amazonian tribes Everett studies–split off from other groups of humans thousands or hundreds of thousands of years before anyone started speaking Proto-Indo-European. Even agriculture, which began about 10,000-15,000 years ago, is older than these proto-languages (and agriculture seems to have prompted the real development of math.)
I also note these language families are the world’s biggest because they successfully conquered speakers of the world’s other languages. Spanish, Portuguese, and English are now widely spoken in the Americas instead of Cherokee, Mayan, and Nheengatu because Indo-European language speakers conquered the speakers of those languages.
The guy with the better numbers doesn’t always conquer the guy with the worse numbers–the Mongol conquest of China is an obvious counterexample. But in these cases, the superior number system sticks around, because no one wants to replace good numbers with bad ones.
In general, though, better tech–which requires numbers–tends to conquer worse tech.
Which means that even though our most successful language families all have number words that appear to be about 4-6,000 years old, we shouldn’t assume this was the norm for most people throughout most of history. Current human numeracy may be a very recent phenomenon.
“The invention of number is attainable by the human mind but is attained through our fingers. Linguistic data, both historical and current, suggest that numbers in disparate cultures have arisen independently, on an indeterminate range of occasions, through the realization that hands can be used to name quantities like 5 and 10. … Words, our ultimate implements for abstract symbolization, can thankfully be enlisted to denote quantities. But they are usually enlisted only after people establish a more concrete embodied correspondence between their fingers and quantities.”
Some more on numbers in different languages:
“Rare number bases have been observed, for instance, in the quaternary (base-4) systems of Lainana languages of California, or in the senary (base-6) systems that are found in southern New Guinea. …
Several languages in Melanesia and Polynesia have or once had number systems that vary in accordance with the type of object being counted. In the case of Old High Fijian, for instance, the word for 100 was Bola when people were counting canoes, but Kora when they were counting coconuts. …
some languages in northwest Amazonia base their numbers on kinship relationships. This is true of Daw and Hup, two related languages in the region. Speakers of the former language use fingers complemented with words when counting from 4 to 10. The fingers signify the quantity of items being counted, but words are used to denote whether the quantity is odd or even. If the quantity is even, speakers say it “has a brother”; if it is odd, they state it “has no brother.”
What about languages with no or very few words for numbers?
In one recent survey of limited number systems, it was found that more than a dozen languages lack bases altogether, and several do not have words for exact quantities beyond 2 and, in some cases, beyond 1. Of course, such cases represent a minuscule fraction of the world’s languages, the bulk of which have number bases reflecting the body-part model. Furthermore, most of the extreme cases in question are restricted geographically to Amazonia. …
All of the extremely restricted languages, I believe, are used by people who are hunter-gatherers or horticulturalists, e.g., the Munduruku. Hunter-gatherers typically don’t have a lot of goods to keep track of or trade, fields to measure, or taxes to pay, and so don’t need to use a lot of numbers. (Note, however, that the Inuit/Eskimo have a perfectly normal base-20 counting system. Their particularly harsh environment appears to have inspired both technological and cultural adaptations.) But why are Amazonian languages even less numeric than those of other hunter-gatherers from similar environments, like central Africa?
Famously, most of the languages of Australia have somewhat limited number systems, and some linguists previously claimed that most Australian languages lack precise terms for quantities beyond 2…. [however] many languages on that continent actually have native means of describing various quantities in precise ways, and their number words for small quantities can sometimes be combined to represent larger quantities via the additive and even multiplicative usage of bases. …
Of the nearly 200 Australian languages considered in the survey, all have words to denote 1 and 2. In about three-quarters of the languages, however, the highest number is 3 or 4. Still, many of the languages use a word for “two” as a base for other numbers. Several of the languages use a word for “five” as a base, and eight of the languages top out at a word for “ten.”
Everett then digresses into what initially seems like a tangent about grammatical number, but luckily I enjoy comparative linguistics.
In an incredibly comprehensive survey of 1,066 languages, linguist Matthew Dryer recently found that 98 of them are like Karitiana and lack a grammatical means of marking nouns as plural. So it is not particularly rare to find languages in which nouns do not show plurality. … about 90% of them have a grammatical means through which speakers can convey whether they are talking about one or more than one thing.
Mandarin is a major language that has limited expression of plurals. According to Wikipedia:
Some languages, such as modern Arabic and Proto-Indo-European, also have a “dual” category distinct from singular or plural; an extremely small set of languages have a trial category.
Many languages also change their verbs depending on how many nouns are involved; in English we say “He runs; they run;” languages like Latin or Spanish have far more extensive systems.
In sum: the vast majority of languages distinguish between 1 and more than one; a few distinguish between one, two, and many, and a very few distinguish between one, two, three, and many.
From the endnotes:
… some controversial claims of quadral markers, used in restricted contexts, have been made for the Austronesian languages Tangga, Marshallese, and Sursurunga. … As Corbett notes in his comprehensive survey, the forms are probably best not considered quadral markers. In fact, his impressive survey did not uncover any cases of quadral marking in the world’s languages.
Everett tends to bury his point; his intention in this chapter is to marshal support for the idea that humans have an “innate number sense” that allows them to pretty much instantly realize if they are looking at 1, 2, or 3 objects, but does not allow for instant recognition of larger numbers, like 4. He posits a second, much vaguer number sense that lets us distinguish between “big” and “small” amounts of things, eg, 10 looks smaller than 100, even if you can’t count.
He does cite actual neuroscience on this point–he’s not just making it up. Even newborn humans appear to be able to distinguish between 1, 2, and 3 of something, but not larger numbers. They also seem to distinguish between some and a bunch of something. Anumeric peoples, like the Piraha, also appear to only distinguish between 1, 2, and 3 items with good accuracy, though they can tell “a little” “some” and “a lot” apart. Everett also cites data from animal studies that find, similarly, that animals can distinguish 1, 2, and 3, as well as “a little” and “a lot”. (I had been hoping for a discussion of cephalopod intelligence, but unfortunately, no.)
How then, Everett asks, do we wed our specific number sense (1, 2, and 3) with our general number sense (“some” vs “a lot”) to produce ideas like 6, 7, and a googol? He proposes that we have no innate idea of 6, nor ability to count to 10. Rather, we can count because we were taught to (just as some highly trained parrots and chimps can.) It is only the presence of number words in our languages that allows us to count past 3–after all, anumeric people cannot.
But I feel like Everett is railroading us to a particular conclusion. For example, he cites neurology studies that found one part of the brain does math–the intraparietal sulcus (IPS)–but only one part? Surely there’s more than one part of the brain involved in math.
The IPS turns out to be part of the extensive network of brain areas that support human arithmetic (Figure 1). Like all networks it is distributed, and it is clear that numerical cognition engages perceptual, motor, spatial and mnemonic functions, but the hub areas are the parietal lobes …
(By contrast, I’ve spent over half an hour searching and failing to figure out how high octopuses can count.)
Moreover, I question the idea that the specific and general number senses are actually separate. Rather, I suspect there is only one sense, but it is essentially logarithmic. For example, hearing is logarithmic (or perhaps exponential,) which is why decibels are also logarithmic. Vision is also logarithmic:
The eye senses brightness approximately logarithmically over a moderate range (but more like a power law over a wider range), and stellar magnitude is measured on a logarithmic scale. This magnitude scale was invented by the ancient Greek astronomer Hipparchus in about 150 B.C. He ranked the stars he could see in terms of their brightness, with 1 representing the brightest down to 6 representing the faintest, though now the scale has been extended beyond these limits; an increase in 5 magnitudes corresponds to a decrease in brightness by a factor of 100. Modern researchers have attempted to incorporate such perceptual effects into mathematical models of vision.
So many experiments have revealed logarithmic responses to stimuli that someone has formulated a mathematical “law” on the matter:
Fechner’s law states that the subjective sensation is proportional to the logarithm of the stimulus intensity. According to this law, human perceptions of sight and sound work as follows: Perceived loudness/brightness is proportional to logarithm of the actual intensity measured with an accurate nonhuman instrument.
p = k ln(S/S₀), where S is the stimulus intensity and S₀ is the threshold intensity below which no sensation is perceived.
The relationship between stimulus and perception is logarithmic. This logarithmic relationship means that if a stimulus varies as a geometric progression (i.e., multiplied by a fixed factor), the corresponding perception is altered in an arithmetic progression (i.e., in additive constant amounts). For example, if a stimulus is tripled in strength (i.e., 3 x 1), the corresponding perception may be two times as strong as its original value (i.e., 1 + 1). If the stimulus is again tripled in strength (i.e., 3 x 3 x 3), the corresponding perception will be three times as strong as its original value (i.e., 1 + 1 + 1). Hence, for multiplications in stimulus strength, the strength of perception only adds. The mathematical derivations of the torques on a simple beam balance produce a description that is strictly compatible with Weber’s law.
In any logarithmic scale, small quantities–like 1, 2, and 3–are easy to distinguish, while medium quantities–like 101, 102, and 103–get lumped together as “approximately the same.”
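A quick sketch of this effect, plugging numbers into Fechner's law with an arbitrary constant k = 1 and threshold S₀ = 1 (my own illustration):

```python
import math

def perceived(S, S0=1.0, k=1.0):
    """Fechner's law: perceived intensity p = k * ln(S / S0)."""
    return k * math.log(S / S0)

# Small quantities sit far apart on a log scale...
print(perceived(2) - perceived(1))      # ~0.69
print(perceived(3) - perceived(2))      # ~0.41
# ...but 101 vs. 102 are nearly indistinguishable.
print(perceived(102) - perceived(101))  # ~0.0099
```

The perceived gap depends on the ratio of the two quantities, not their difference–so 1, 2, and 3 feel distinct while 101, 102, and 103 blur together.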
Of course, this still doesn’t answer the question of how people develop the ability to count past 3, but this is getting long, so we’ll continue our discussion next week.
I’m about halfway through Caleb Everett’s Numbers and the Making of Us: Counting and the Course of Human Cultures. Everett begins the book with a lengthy clarification that he thinks everyone in the world has equal math abilities, some of us just happen to have been exposed to more number ideas than others. Once that’s out of the way, the book gets interesting.
When did humans invent numbers? It’s hard to say. We have notched sticks from the Paleolithic, but no way to tell if these notches were meant to signify numbers or were just decorative.
The slightly more recent Ishango, Lebombo, and Wolf bones seem more likely to indicate that someone was at least counting–if not keeping track of–something.
The Ishango bone (estimated 20,000 years old, found in the Democratic Republic of the Congo near the headwaters of the Nile,) has three sets of notches–two sets total to 60, the third to 48. Interestingly, the notches are grouped, with one set of sixty composed of primes–19 + 17 + 13 + 11–and the other of the odd numbers 9 + 19 + 21 + 11. The set of 48 contains groups of 3, 6, 4, 8, 10, 5, 5, and 7. Aside from the stray seven, the sequence tantalizingly suggests that someone was doubling numbers.
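The notch groupings reported for the bone are easy to check (a quick sketch of my own):

```python
# Notch groupings on the three rows of the Ishango bone.
row_a = [19, 17, 13, 11]             # all primes
row_b = [9, 19, 21, 11]              # all odd
row_c = [3, 6, 4, 8, 10, 5, 5, 7]    # hints of doubling

print(sum(row_a), sum(row_b), sum(row_c))  # 60 60 48

# The doubling pattern: 3 -> 6, 4 -> 8, 5 -> 10.
pairs = [(3, 6), (4, 8), (5, 10)]
print(all(b == 2 * a for a, b in pairs))   # True
```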
The Ishango bone also has a quartz point set into the end, which perhaps allowed it to be used for scraping, drawing, or etching–or perhaps it just looked nice atop someone’s decorated bone.
The Lebombo bone, (estimated 43,000-44,000 years old, found near the border between South Africa and Swaziland,) is quite similar to the Ishango bone, but only contains 29 notches (as far as we can tell–it’s broken.)
I’ve seen a lot of people proclaiming “Scientists think it was used to keep track of menstrual cycles. Menstruating African women were the first mathematicians!” so I’m just going to let you in on a little secret: scientists have no idea what it was for. Maybe someone was just having fun putting notches on a bone. Maybe someone was trying to count all of their relatives. Maybe someone was counting days between new and full moons, or counting down to an important date.
Without a far richer archaeological assembly than one bone, we have no idea what this particular person might have wanted to count or keep track of. (Also, why would anyone want to keep track of menstrual cycles? You’ll know when they happen.)
The Wolf bone (30,000 years old, Czech Republic,) has received far less interest from folks interested in proclaiming that menstruating African women were the first mathematicians, but is a nice looking artifact with 60 notches–notches 30 and 31 are significantly longer than the others, as though marking a significant place in the counting (or perhaps just the middle of the pattern.)
Everett cites another, more satisfying tally stick: a 10,000 year old piece of antler found in the anoxic waters of Little Salt Spring, Florida. The antler contains two sets of marks: 28 (or possibly 29–the top is broken in a way that suggests another notch might have been a weak point contributing to the break) large, regular, evenly spaced notches running up the antler, and a much smaller set of notches set beside and just slightly beneath the first. It definitely looks like someone was ticking off quantities of something they wanted to keep track of.
Here’s an article with more information on Little Salt Spring and a good photograph of the antler.
I consider the bones “maybes” and the Little Salt Spring antler a definite for counting/keeping track of quantities.
Everett also mentions a much more recent and highly inventive tally system: the Incan quipu.
A quipu is made of knotted strings attached to one central string. A series of knots along the length of each string denotes numbers–one knot for 1, two for 2, etc. The knots are grouped in clusters, allowing place value–first cluster for the ones, second for the tens, third for hundreds, etc. (And a blank space for a zero.)
Thus a sequence of 2 knots, 4 knots, a space, and 5 knots = 5,042.
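Read this way, a quipu string is simply positional notation in base 10. A minimal sketch in Python (the list representation and function name are my own invention, not anything Incan):

```python
# Decode a quipu string: each entry is the number of knots in one cluster,
# listed ones-place first; 0 stands for a blank space (the quipu's zero).
def decode_quipu(clusters):
    return sum(knots * 10 ** place for place, knots in enumerate(clusters))

# 2 knots, 4 knots, a blank space, then 5 knots:
print(decode_quipu([2, 4, 0, 5]))  # 5042
```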
The Incas, you see, had an empire to administer, no paper, but plenty of lovely alpaca wool. So being inventive people, they made do.
Everett then discusses the construction of names for numbers/base systems in different languages. Many languages use a combination of different bases–e.g., “two twos” for four (base 2), “two hands” to signify 10 (base 5), and from there, words for multiples of 10 or 20 (base 10 or 20)–and all of these can appear in the same language. He argues convincingly that most languages derived their counting words from our original tally sticks: fingers and toes, found in quantities of 5, 10, and 20. So the number for 5 in a language might be “one hand,” the number for 10 “two hands,” and the number for 20 “one person” (two hands + two feet). We could express the number 200 in such a language by saying “two hands of one person” = 10 × 20.
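The arithmetic behind such body-part systems is easy to check. A toy illustration in Python (the words and the system are a hypothetical composite for illustration, not any particular language):

```python
# A toy body-part counting system: "hand" = 5, "person" = 20 (two hands + two feet).
HAND, PERSON = 5, 20

def hands(n):    # the value of "n hands"
    return n * HAND

def persons(n):  # the value of "n persons"
    return n * PERSON

print(hands(2))               # 10  -> "two hands"
print(persons(1))             # 20  -> "one person"
print(hands(2) * persons(1))  # 200 -> "two hands of one person" = 10 x 20
```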
(If you’re wondering how anyone could come up with a base-60 system, such as we inherited from the Babylonians for telling time, try using your thumb to count the twelve knuckles–three per finger–of the four fingers on one hand, times the five fingers of the other hand: 12 × 5 = 60.)
Which raises the question of what counts as a “number” word (numeral). Some languages, it is claimed, don’t have words for numbers higher than 3–but put out an array of 6 objects, and their speakers can construct numbers like “three twos.” Is this a number? What about the number in English that comes after twelve: four-teen, really just a longstanding mispronunciation of “four and ten”?
Perhaps a better question than “Do they have a word for it?” is “Do they have a common, easy-to-use word for it?” English contains the word nonillion, but you probably don’t use it very often (and according to the dictionary, a nonillion is much bigger in Britain than in the US, which makes it especially useless). By contrast, you probably use quantities like a hundred or a thousand all the time, especially when thinking about household budgets.
Roman numerals are really just an advanced tally system with two bases: 5 and 10. IIII are clearly regular tally marks. V (5) is similar to our practice of crossing through four tally marks. X (10) is two Vs set together. L (50) is a rotated V. C (100) is an abbreviation for the Latin word centum, “hundred.” (I, V, X, and L are not abbreviations.) I’m not sure why 500 is D; maybe just because D follows C and looks like a C with an extra line. M is short for mille, “thousand.” Roman numerals are also fairly unusual in their use of subtraction in writing numbers, which few systems do because it makes addition horrible. E.g., IV and VI are not the same number, nor do they equal 15 and 51; they equal 4 (5 − 1) and 6 (5 + 1), respectively. Adding or multiplying large Roman numerals quickly becomes cumbersome; if you don’t believe me, try XLVII times XVIII with only a pencil and paper.
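To see how much machinery even reading Roman numerals requires, here is a short Python sketch of the subtractive rule, which we can use to check that multiplication: XLVII × XVIII = 47 × 18 = 846.

```python
# Parse a Roman numeral, honoring the subtractive rule (IV = 4, XL = 40, ...).
VALUES = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_int(s):
    total = 0
    for i, ch in enumerate(s):
        v = VALUES[ch]
        # A smaller numeral written before a larger one is subtracted
        # (e.g. the X in XL); otherwise it is added.
        if i + 1 < len(s) and VALUES[s[i + 1]] > v:
            total -= v
        else:
            total += v
    return total

print(roman_to_int('IV'), roman_to_int('VI'))         # 4 6
print(roman_to_int('XLVII') * roman_to_int('XVIII'))  # 846
```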
Now imagine you’re trying to run an empire this way.
You’re probably thinking, “At least those quipus had a zero and were reliably base ten,” about now.
Interestingly, the Mayans (and possibly the Olmecs) already had a proper symbol that they used for zero in their combination base-5/base-20 system with pretty functional place value at a time when the Greeks and Romans did not (the ancient Greeks were philosophically unsure about this concept of a “number that isn’t there.”)
(Note: given the level of sophistication of Native American civilizations like the Inca, Aztec, and Maya, and the fact that these developed in near total isolation, they must have been pretty smart. Their current populations appear to be under-performing relative to their ancestors.)
But let’s let Everett have a chance to speak:
Our increasingly refined means of survival and adaptation are the result of a cultural ratchet. This term, popularized by Duke University psychologist and primatologist Michael Tomasello, refers to the fact that humans cooperatively lock in knowledge from one generation to the next, like the clicking of a ratchet. In other words, our species’ success is due in large measure to individual members’ ability to learn from and emulate the advantageous behavior of their predecessors and contemporaries in their community. What makes humans special is not simply that we are so smart, it is that we do not have to continually come up with new solutions to the same old problems. …
Now this is eminently reasonable; I did not invent the calculus, nor could I have done so had it not already existed. Luckily for me, Newton and Leibniz already invented it, and I live in a society that goes to great lengths to encode math in textbooks and teach it to students.
I call this “cultural knowledge” or “cultural memory,” and without it we’d still be monkeys with rocks.
The importance of gradually acquired knowledge stored in the community, culturally reified but not housed in the mind of any one individual, crystallizes when we consider cases in which entire cultures have nearly gone extinct because some of their stored knowledge dissipated due to the death of individuals who served as crucial nodes in their community’s knowledge network. In the case of the Polar Inuit of Northwest Greenland, population declined in the mid-nineteenth century after an epidemic killed several elders of the community. These elders were buried along with their tools and weapons, in accordance with local tradition, and the Inuits’ ability to manufacture the tools and weapons in question was severely compromised. … As a result, their population did not recover until about 40 years later, when contact with another Inuit group allowed for the restoration of the communal knowledge base.
The first big advance, the one that separates us from the rest of the animal kingdom, was language itself. Yes, other animals can communicate–whales and birds sing; bees do their waggle dance–but only humans have full-fledged, generative language which allows us to both encode and decode new ideas with relative ease. Language lets different people in a tribe learn different things and then pool their ideas far more efficiently than mere imitation.
The next big leap was the development of visual symbols we could record–and read–on wood, clay, wax, bones, cloth, cave walls, etc. Everett suggests that the first of these symbols were likely tally marks such as those found on the Lebombo bone, though of course the ability to encode a buffalo on the wall of the Lascaux cave, France, was also significant. From these first symbols we developed both numbers and letters, which eventually evolved into books.
Books are incredible. Books are like external hard drives for your brain, letting you store, access, and transfer information to other people well beyond your own limits of memorization and well beyond a human lifetime. Books reach across the ages, allowing us to read what philosophers, poets, priests and sages were thinking about a thousand years ago.
Recently we invented an even more incredible information storage/transfer device: computers/the internet. To be fair, they aren’t as sturdy as clay tablets (fired clay is practically immortal), but they can handle immense quantities of data–and make it searchable, an incredibly important task.
But Everett tries to claim that the cultural ratchet is all there is to human mathematical ability. If you live in a society with calculus textbooks, then you can learn calculus, and if you don’t, you can’t. Everett does not want to imply that Amazonian tribesmen with no words for numbers bigger than three are in any way less able to do math than the Mayans with their place value system and fancy zero.
But this seems unlikely for two reasons. First, we know very well that even in societies with calculus textbooks, not everyone can make use of them. Even among my own children, who have been raised with about as similar an environment as a human can make and have very similar genetics, there’s a striking difference in intellectual strengths and weaknesses. Humans are not identical in their abilities.
Moreover, we know that different mental tasks are performed in different, specialized parts of the brain. For example, we decode letters in the “visual word form area” of the brain; people whose VWAs have been damaged can still read, but they have to use different parts of their brains to work out the letters and they end up reading more slowly than they did before.
Memorably, before he died, the late Henry Harpending (of West Hunter) had a stroke while in Germany. He initially didn’t notice the stroke because it was located in the part of the brain that decodes letters into words, but since he was in Germany, he didn’t expect to read the words, anyway. It was only when he looked at something written in English later that day that he realized he couldn’t read it, and soon after I believe he passed out and was taken to the hospital.
Why should our brains have a VWA at all? It’s not like our primate ancestors did a whole lot of reading. It turns out that the VWA is repurposed from the part of our brain that recognizes faces :)
Likewise, there are specific regions of the brain that handle mathematical tasks. People who are better at math not only have more gray matter in these regions, but they also have stronger connections between them, letting them work together in harmony to solve different problems. We don’t do math by just throwing all of our mental power at a problem, but by routing it through specific regions of our brain.
Interestingly, humans and chimps differ in their ability to recognize faces and perceive emotions. (For anatomical reasons, chimps are more inclined to identify each other’s bottoms than each other’s faces.) We evolved the ability to recognize faces–in the region of our brain we now use to decode letters–when we began walking upright and interacting with each other face to face, though we do have some vestigial interest in butts and butt-like regions (“My eyes are up here.”) Our brains have evolved over the millennia to get better at specific tasks–in this case, face reading, a precursor to decoding symbolic language.
And there is a tremendous quantity of evidence that intelligence is at least partly genetic–estimates for the heritability of intelligence range between 60 and 80%. The rest of the variation–the environmental part–looks to be essentially random chance, such as accidents, nutrition, or perhaps your third grade teacher.
So, yes, we absolutely can breed people for mathematical or linguistic ability, if that’s what the environment is selecting for. By contrast, if there have been no particular mathematical or linguistic selection pressures in an environment (a culture with no written language, no mathematical notation, and very few words for numbers clearly is not experiencing much pressure to use them), then you won’t select for such abilities. The question is not whether we can all be Newtons (or Leibnizes), but how many Newtons a society produces and how many people in that society have the potential to understand calculus, given the chance.
Just looking at the state of different societies around the world (including many indigenous groups that live within and have access to modern industrial or post-industrial technologies), there is clear variation in the average abilities of different groups to build and maintain complex societies. Japanese cities are technologically advanced, clean, and violence-free. Brazil (which hasn’t even been nuked) is full of incredibly violent, unsanitary, poorly-constructed favelas. Some of this variation is cultural (Venezuela is doing particularly badly because communism doesn’t work), some is random chance (Saudi Arabia has oil), but some of it, by necessity, is genetic.
But if you find that a depressing thought, take heart: selective pressures can be changed. Start selecting for mathematical and verbal ability (and let everyone have a shot at developing those abilities) and you’ll get more mathematical and verbal abilities.
But this is getting long, so let’s continue our discussion next week.
Danna Staaf’s Squid Empire: The Rise and Fall of the Cephalopods is about the evolution of squids and their relatives–nautiluses, cuttlefish, octopuses, ammonoids, etc. If you are really into squids or would like to learn more about squids, this is the book for you. If you aren’t big on reading about squids but want something that looks nice on your coffee table and matches your Cthulhu, Flying Spaghetti Monster, and 20,000 Leagues Under the Sea decor, this is the book for you. If you aren’t really into squids, you probably won’t enjoy this book.
Squids, octopuses, etc. are members of the class of cephalopods, just as you are a member of the class of mammals. Mammals are in the phylum of chordates; cephalopods are mollusks. It’s a surprising lineage for one of Earth’s smartest creatures–80% of mollusk species are slugs and snails. If you think you’re surrounded by idiots, imagine how squids must feel.
The short story of cephalopodic evolution is that millions upon millions of years ago, most life was still stuck at the bottom of the ocean. There were some giant microbial mats, some slugs, some snails, some worms, and not a whole lot else. One of those snails figured out how to float by removing some of the salt from the water inside its shell, making itself a bit buoyant. Soon after, its foot (all mollusks have a “foot”) split into multiple parts. The now-floating snail drifted over the seafloor, using its new tentacles to catch and eat the less-mobile creatures below it.
From here, cephalopods diversified dramatically, creating the famous ammonoids of fossil-dating lore.
Ammonoids are known primarily from their shells (which fossilize well) rather than their fleshy tentacle parts (which fossilize badly). But shells we have in such abundance that they can easily be used for dating other nearby fossils.
Ammonoids are obviously similar to their cousins, the lovely chambered nautiluses. (Please don’t buy nautilus shells; taking them out of their shells kills them and no one farms nautiluses so the shell trade is having a real impact on their numbers. We don’t need their shells, but they do.)
Ammonoids succeeded for millions of years, until the Cretaceous extinction event that also took out the dinosaurs. The nautiluses survived–as the author speculates, perhaps because they lay large, yolk-rich eggs that develop very slowly, infant nautiluses were able to wait out the event, while ammonoids, whose tiny, fast-hatching eggs depended on feeding immediately after hatching, simply starved in the upheaval.
In the aftermath, modern squids and octopuses proliferated.
How did we get from floating, shelled snails to today’s squishy squids?
The first step was internalization–cephalopods began growing their fleshy mantles over their shells instead of inside of them–in essence, turning their shells into internal skeletons. Perhaps this was some horrible genetic accident, but it worked out. These internalized shells gradually became smaller and thinner, until they were reduced to a flexible rod called a “pen” that runs the length of a squid’s mantle. (Cuttlefish still retain a more substantial bone, which is frequently collected on beaches and sold for birds to peck at for its calcium.)
With the loss of the buoyant shell, squids had to find another way to float. This they apparently achieved by filling themselves with ammonia salts, which makes them less dense than water but also makes their decomposition disgusting and renders them unfossilizable because they turn to mush too quickly. Octopuses, by contrast, aren’t full of ammonia and so can fossilize.
Since the book is devoted primarily to cephalopod evolution rather than modern cephalopods, it doesn’t go into much depth on the subject of their intelligence. Out of all the invertebrates, cephalopods are easily the most intelligent (perhaps really the only intelligent invertebrates). Why? If cephalopods didn’t exist, we might easily conclude that invertebrates can’t be intelligent–invertebrateness is somehow inimical to intelligence. After all, most invertebrates are about as intelligent as slugs. But cephalopods do exist, and they’re pretty smart.
The obvious answer is that cephalopods can move and are predatory, which requires bigger brains. But why are they the only invertebrates–apparently–who’ve accomplished the task?
But enough jabber–let’s let Mrs. Staaf speak:
I find myself obliged to address the perennial question: “octopuses” or “octopi”? Or, heaven help us, “octopodes”?
Whichever you like best. Seriously. Despite what you may have heard, “octopus” is neither ancient Greek nor Latin. Aristotle called the animal polypous for its “many feet.” The ancient Romans borrowed this word and latinized the spelling to polypus. It was much later that a Renaissance scientist coined and popularized the word “octopus,” using Greek roots for “eight” and “foot” but Latin spelling.
If the word had actually been Greek, it would be spelled octopous and pluralized octopodes. If translated into Latin, it might have become octopes and pluralized octopedes, but more likely the ancient Romans would have simply borrowed the Greek word–as they did with polypous. Those who perhaps wished to appear erudite used the Greek plural polypodes, while others favored a Latin ending and pluralized it polypi.
The latter is a tactic we English speakers emulate when we welcome “octopus” into our own language and pluralize it “octopuses” as I’ve chosen to do.
There. That settles it.
Dinosaurs are the poster children for evolution and extinction writ large…
Of course, not all of them did die. We know now that birds are simply modern dinosaurs, but out of habit we tend to reserve the word “dinosaur” for the huge ancient creatures that went extinct at the end of the Cretaceous. After all, even if they had feathers, they seem so different from today’s finches and robins. For one thing, the first flying feathered dinosaurs all seem to have had four wings. There aren’t any modern birds with four wings.
Well… actually, domestic pigeons can be bred to grow feathers on their legs. Not fuzzy down, but long flight feathers, and along with these feathers their leg bones grow more winglike. The legs are still legs; they can’t be used to fly like wings. They do, however, suggest a clear step along the road from four-winged dinosaurs to two-winged birds. The difference between pigeons with ordinary legs and pigeons with wing-legs is created by control switches in their DNA that alter the expression of two particular genes. These genes are found in all birds, indeed in all vertebrates, and so were most likely present in dinosaurs as well.
…and I’ve just discovered that almost all of my other bookmarks fell out of the book. Um.
So squid brains are shaped like donuts because their eating/jet propulsion tube runs through the middle of their bodies and thus through the middle of their brains. It seems like this could be a problem if the squid eats too much or eats something with sharp bits in it, but squids seem to manage.
Squids can also leap out of the water and fly through the air for some ways. Octopuses can carry water around in their mantles, allowing them to move on dry land for a few minutes without suffocating.
Since cephalopods are unusual among mollusks for their ability to move quickly, they have a lot in common, genetically, with vertebrates. In essence, they are the most vertebrate-behaving of the mollusks. Convergent evolution.
The vampire squid, despite its name, is actually more of an octopus.
Let me quote from the chapter on sex and babies:
This is one arena in which cephalopods, both ancient and modern, are actually less alien than many aliens–even other mollusks. Slugs, for instance, are hermaphroditic, and in the course of impregnating each other their penises sometimes get tangled, so they chew them off. Nothing in the rest of this chapter will make you nearly that uncomfortable. …
In one living coleoid species, however, sex is blindingly obvious. Females of the octopus known as an argonaut are five times larger than males. (A killer whale is about five times larger than an average adult human, which in turn is about five times larger than an opossum.)
This enormous size differential caught the attention of paleontologists who had noticed that many ammonoid species also came in two distinct sizes, which they had dubbed microconch (little shell) and macroconch (big shell). Both were clearly mature, as they had completed the juvenile part of the shell and constructed the final adult living chamber. After an initial flurry of debate, most researchers agreed to model ammonoid sex on modern argonauts, and began to call macroconchs females and microconchs males.
Some fossil nautiloids also come in macroconch and microconch flavors, though it’s more difficult to be certain that both are adults…
However, the shells of modern nautiluses show the opposite pattern–males are somewhat larger than females… Like the nautiloid shift from ten arms to many tens of arms, the pattern could certainly have evolved from a different ancestral condition. If we’re going to make that argument, though, we have to wonder when nautiloids switched from females to males as the larger sex, and why.
In modern species that have larger females, we usually assume the size difference has to do with making or brooding a lot of eggs. Female argonauts take it up a notch and actually secrete a shell-like brood chamber from their arms, using it to cradle numerous batches of eggs over their lifetime. Meanwhile, each tiny male argonaut gets to mate only once. His hectocotylus is disposable–after being loaded with sperm and inserted into the female, it breaks off. …
By contrast, when males are the bigger sex, we often guess that the purpose is competition. Certainly many species of squid and cuttlefish have large males that battle for female attention on the mating grounds. They display outrageous skin patterns as they push, shove, and bite each other. Females do appear impressed; at least, they mate with the winning males and consent to be guarded by them. Even in these species, though, there are some small males who exhibit a totally different mating strategy. While the big males strut their stuff, these small males quietly sidle up to the females, sometimes disguising themselves with female color patterns. This doesn’t put off the real females, who readily mate with these aptly named “sneaker males.” By their very nature, such obfuscating tactics are virtually impossible to glean from the fossil record…
In the majority of countries, women are more likely to be overweight than men (suggesting that our measure of “overweight” is probably flawed). In some countries women are much more likely to be overweight, while in some countries men and women are almost equally likely to be overweight, and in just a few–the Czech Republic, Germany, Hungary, Japan, and (barely) France–men are more likely to be overweight.
Is there any rhyme or reason to this pattern? Surely affluence is related, but Japan, for all of its affluence, has very few overweight people at all, while Egypt, which is pretty poor, has far more overweight people. (A greater % of Egyptian women are overweight than American women, but American men are more likely to be overweight than Egyptian men.)
Of course, male humans are still–in every country–larger than females. Even an overweight female doesn’t necessarily weigh more than a regular male. But could the variation in male and female obesity rates have anything to do with historic mating strategies? Or is it completely irrelevant?
Back to the book:
Coleoid eyes are as complex as our own, with a lens for focusing light, a retina to detect it, and an iris to sharpen the image. … Despite their common complexity, though, there are some striking differences [between our and squid eyes]. For example, our retina has a blind spot where a bundle of nerves enters the eyeball before spreading out to connect to the front of every light receptor. By contrast, light receptors in the coleoid retina are innervated from behind, so there’s no “hole” or blind spot. Structural differences like this show that the two groups converged on similar solutions through distinct evolutionary pathways.
Another significant difference is that fish went on to evolve color vision by increasing the variety of light-sensitive proteins in their eyes; coleoids never did and are probably color blind. I say “probably” because the idea of color blindness in such colorful animals has flummoxed generations of scientists…
Color-blind or not, coleoids can definitely see something we humans are blind to: the polarization of light.
Sunlight normally consists of waves vibrating in all directions. But when these waves are reflected off certain surfaces, like water, they get organized and arrive at the retina vibrating in only one direction. We call this “glare” and we don’t like it, so we invented polarized sunglasses. … That’s pretty much all polarized sunglasses can do–block polarized light. Sadly, they can’t help you decode the secret messages of cuttlefish, which have the ability to perform a sort of double-talk with their skin, making color camouflage for the benefit of polarization-blind predators while flashing polarized displays to their fellow cuttlefish.
Overall, I enjoyed this book. The writing isn’t the most thrilling, but the author has a sense of humor and a deep love for her subject. I recommend it to anyone with a serious hankering to know more about the evolution of squids, or who’d like to learn more about an ancient animal besides dinosaurs.
In previous posts, we discussed the evolution of Whites and Asians, so today we’re taking a look at people from Sub-Saharan Africa.
Modern humans only left Africa about 100,000 to 70,000 years ago, and split into Asians and Caucasians around 40,000 years ago. Their modern appearances came later–white skin, light hair, and light eyes, for example, only evolved in the past 20,000, and possibly the past 10,000, years.
What about the Africans, or specifically, Sub-Saharans? (North Africans, like Tunisians and Moroccans, are in the Caucasian clade.) When did their phenotypes evolve?
The Sahara, an enormous desert about the size of the United States, is one of the world’s biggest, most ancient barriers to human travel. The genetic split between SSAs and non-SSAs, therefore, is one of the oldest and most substantial among human populations. But there are even older splits within Africa–some of the ancestors of today’s Pygmies and Bushmen may have split off from other Africans 200,000-300,000 years ago. We’re not sure, because the study of archaic African DNA is still in its infancy.
The Bushmen present an interesting case, because their skin is quite light (for Africans). I prefer to call it golden. The nearby Damara of Namibia, by contrast, are one of the world’s darkest peoples. (The peoples of South Sudan, e.g., Malik Agar, may be darker, though.) The Pygmies are the world’s shortest peoples; the peoples of South Sudan, such as the Dinka and Shilluk, are among the world’s tallest.
Sub-Saharan Africa’s ethnic groups can be grouped, very broadly, into Bushmen, Pygmies, Bantus (aka Niger-Congo), Nilotics, and Afro-Asiatics. Bushmen and Pygmies are extremely small groups, while Bantus dominate the continent–about 85% of Sub-Saharan Africans speak a language from the Niger-Congo family. The Afro-Asiatic groups, as their name implies, have had extensive contact with North Africa and the Middle East.
Most of America’s black population hails from West Africa–that is, the primarily Bantu region. The Bantus and similar-looking groups among the Nilotics and Afro-Asiatics (like the Hausa) therefore have both Africa’s most iconic and most common phenotypes.
For the sake of this post, we are not interested in the evolution of traits common to all humans, such as bipedalism. We are only interested in those traits generally shared by most Sub-Saharans and generally not shared by people outside of Africa.
One striking trait is black hair: it is distinctively “curly” or “frizzy.” Chimps and gorillas do not have curly hair. Neither do whites and Asians. (Whites and Asians, therefore, more closely resemble chimps in this regard.) Only Africans and a smattering of other equatorial peoples like Melanesians have frizzy hair.
Black skin is similarly distinct. Chimps, who live in the shaded forest and have fur, do not have high levels of melanin all over their bodies. While chimps naturally vary in skin tone, an unfortunate, hairless chimp is practically “white.”
Humans therefore probably evolved both black skin and frizzy hair at about the same time–when we came out of the shady forests and began running around on the much sunnier savannahs. Frizzy hair seems well-adapted to cooling–by standing on end, it lets air flow between the follicles–and of course melanin is protective from the sun’s rays. (And apparently, many of the lighter-skinned Bushmen suffer from skin cancer.)
Steatopygia also comes to mind, though I don’t know if anyone has studied its origins.
According to Wikipedia, additional traits common to Sub-Saharan Africans include:
Modern cross-analysis of osteological variables and genome-wide SNPs has identified specific genes, which control this craniofacial development. Of these genes, DCHS2, RUNX2, GLI3, PAX1 and PAX3 were found to determine nasal morphology, whereas EDAR impacts chin protrusion. …
Ashley Montagu lists “neotenous structural traits in which…Negroids [generally] differ from Caucasoids… flattish nose, flat root of the nose, narrower ears, narrower joints, frontal skull eminences, later closure of premaxillary sutures, less hairy, longer eyelashes, [and] cruciform pattern of second and third molars.”
As hominids gradually lost their fur (between 4.5 and 2 million years ago) to allow for better cooling through sweating, their naked and lightly pigmented skin was exposed to sunlight. In the tropics, natural selection favoured dark-skinned human populations as high levels of skin pigmentation protected against the harmful effects of sunlight. Indigenous populations’ skin reflectance (the amount of sunlight the skin reflects) and the actual UV radiation in a particular geographic area is highly correlated, which supports this idea. Genetic evidence also supports this notion, demonstrating that around 1.2 million years ago there was a strong evolutionary pressure which acted on the development of dark skin pigmentation in early members of the genus Homo.…
About 7 million years ago human and chimpanzee lineages diverged, and between 4.5 and 2 million years ago early humans moved out of rainforests to the savannas of East Africa. They not only had to cope with more intense sunlight but had to develop a better cooling system. …
Skin colour is a polygenic trait, which means that several different genes are involved in determining a specific phenotype. …
Data collected from studies on the MC1R gene has shown that there is a lack of diversity in dark-skinned African samples in the allele of the gene compared to non-African populations. This is remarkable given that the number of polymorphisms for almost all genes in the human gene pool is greater in African samples than in any other geographic region. So, while the MC1R gene does not significantly contribute to variation in skin colour around the world, the allele found in high levels in African populations probably protects against UV radiation and was probably important in the evolution of dark skin.
Skin colour seems to vary mostly due to variations in a number of genes of large effect as well as several other genes of small effect (TYR, TYRP1, OCA2, SLC45A2, SLC24A5, MC1R, KITLG and SLC24A4). This does not take into account the effects of epistasis, which would probably increase the number of related genes. Variations in the SLC24A5 gene account for 20–25% of the variation between dark and light skinned populations of Africa, and appear to have arisen as recently as within the last 10,000 years. The Ala111Thr or rs1426654 polymorphism in the coding region of the SLC24A5 gene reaches fixation in Europe, and is also common among populations in North Africa, the Horn of Africa, West Asia, Central Asia and South Asia.
That’s rather interesting about MC1R. It could imply that the difference in skin tone between sub-Saharan Africans (SSAs) and non-SSAs is due to active selection in Blacks for dark skin and relaxed selection in non-Blacks, rather than active selection for light skin in non-Blacks.
MC1R is one of the key proteins involved in regulating mammalian skin and hair color. …It works by controlling the type of melanin being produced: its activation causes the melanocyte to switch from generating the yellow or red phaeomelanin by default to producing the brown or black eumelanin instead. …
This is consistent with active selection being necessary to produce dark skin, and relaxed selection producing lighter tones.
Studies show the MC1R Arg163Gln allele has a high frequency in East Asia and may be part of the evolution of light skin in East Asian populations. No evidence is known for positive selection of MC1R alleles in Europe and there is no evidence of an association between MC1R and the evolution of light skin in European populations. The lightening of skin color in Europeans and East Asians is an example of convergent evolution.
Dark-skinned people living in low-sunlight environments have been recorded to be very susceptible to vitamin D deficiency due to reduced vitamin D synthesis. A dark-skinned person requires about six times as much UVB exposure as a lightly pigmented person.
In Reconstructing Prehistoric African Population Structure, Skoglund et al assembled genetic data from 16 prehistoric Africans and compared them to DNA from nearby present-day Africans. They found:
1. The ancestors of the Bushmen (aka the San/KhoiSan) once occupied a much wider area.
2. They contributed about 2/3 of the ancestry of ancient Malawi hunter-gatherers (around 8,100-2,500 YA).
3. They contributed about 1/3 of the ancestry of ancient Tanzanian hunter-gatherers (around 1,400 YA).
4. Farmers (Bantus) spread from west Africa, completely replacing hunter-gatherers in some areas.
5. Modern Malawians are almost entirely Bantu.
6. A Tanzanian pastoralist population from 3,100 YA spread out across east Africa and into southern Africa.
7. Bushman ancestry was not found in the modern Hadza, even though they are hunter-gatherers and speak a click language like the Bushmen.
8. The Hadza more likely derive most of their ancestry from ancient Ethiopians.
9. Modern Bantu-speakers in Kenya derive from a mix between western Africans and Nilotics around 800-400 years ago.
10. Middle Eastern (Levant) ancestry is found across eastern Africa from an admixture event that occurred around 3,000 YA, or around the same time as the Bronze Age Collapse.
11. A small amount of Iranian DNA arrived more recently in the Horn of Africa.
12. Ancient Bushmen were more closely related to modern eastern Africans like the Dinka (Nilotics) and Hadza than to modern west Africans (Bantus).
13. This suggests either complex relationships between the groups or that some Bantus may have had ancestors from an unknown group of humans more ancient than the Bushmen.
14. Modern Bushmen have been evolving darker skins.
15. Pygmies have been evolving shorter stature.
I missed #12-13 on my previous post about this paper, though I did note that the more data we get on ancient African groups, the more likely I think we are to find ancient admixture events. If humans can mix with Neanderthals and Denisovans, then surely our ancestors could have mixed with Ergaster, Erectus, or whoever else was wandering around.
#14 is interesting, and consistent with the claim that Bushmen suffer from a lot of skin cancer–before the Bantu expansion, they lived in far more forgiving climates than the Kalahari desert. But since Bushmen are already lighter than their neighbors, this raises the question of how light their ancestors–who had no Levantine admixture–were. Could the Bantus’ and Nilotics’ darker skins have evolved after the Bushmen/everyone else split?
Meanwhile, in Loci Associated with Skin Pigmentation Identified in African Populations, Crawford et al used genetic samples from 1,570 people from across Africa to find six genetic areas–SLC24A5, MFSD12, DDB1, TMEM138, OCA2 and HERC2–which account for almost 30% of the local variation in skin color.
SLC24A5’s light-pigment variant was introduced to east Africa from the Levant, probably around 3,000 years ago. Today, it is common in Ethiopia and Tanzania.
Interestingly, according to the article, “At all other loci, variants associated with dark pigmentation in Africans are identical by descent in southern Asian and Australo-Melanesian populations.”
These are the world’s other darkest peoples, such as the Jarawas of the Andaman Islands or the Melanesians of Bougainville, PNG. (And, I assume, some groups from India, such as the Tamils.) This implies that these groups (1) already had dark skin when they left Africa, and (2) never lost it on their way to their current homes. (If they had gotten lighter during their journey and then darkened again upon arrival, they would likely have different skin-color variants than their African cousins.)
This implies that even if the Bushmen split off (around 200,000-300,000 YA) before dark skin evolved, it had evolved by the time people left Africa and headed toward Australia (around 100,000-70,000 YA). This gives us a minimum threshold: dark skin most likely evolved before 70,000 YA.
(But as always, we should be careful, because perhaps there are even more skin-color variants that we don’t know about yet in these populations.)
A variant of MFSD12 is common among Nilotics and is related to darker skin.
Further, the alleles associated with skin pigmentation at all loci but SLC24A5 are ancient, predating the origin of modern humans. The ancestral alleles at the majority of predicted causal SNPs are associated with light skin, raising the possibility that the ancestors of modern humans could have had relatively light skin color, as is observed in the San population today.
The full article is not out yet, so I still don’t know when all of these light and dark alleles emerged, but the order is absolutely intriguing. For now, it looks like this mystery will still have to wait.