When did Whites Evolve?

Defining exactly who is white is contentious and difficult–so I shan’t. If you want to debate among yourselves whether or not the Irish or Hindus count, that’s your own business.


Here’s Haak et al’s full graph of human genomes from around the world (see here and here for various discussions). The genomes on the far left are from ancient European skeletons; everything from the “pink” section onward is modern. The “African” genomes all have bright blue at their bottoms; Asian (and American Indian) genomes all have yellow. The European countries tend to have a triple-color profile, reflecting their recent (evolutionarily speaking) mix of European hunter-gatherers (dark blue), Middle Eastern farmers (orange), and a “teal” group that came in with the Indo-European speakers but whose origins we have yet to uncover:

Europe

Unsurprisingly, the Basque have less of this “teal.” Middle Easterners, as you can see, are quite similar genetically, but tend to have “purple” instead of “dark blue.”
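(A quick technical aside: a graph like Haak et al’s is just a stacked bar chart of per-individual ancestry fractions–the “Q matrix” that a program such as ADMIXTURE produces. Here is a minimal Python sketch of the mechanics; the populations, components, and proportions below are invented for illustration and are not Haak et al’s actual values.)

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical ancestry fractions (rows = populations, columns = ancestry
    # components). Real values come from a program like ADMIXTURE, which
    # outputs a "Q matrix" of this shape.
    labels = ["Basque", "Spanish", "French", "English", "Lithuanian"]
    components = ["hunter-gatherer (dark blue)", "farmer (orange)", "steppe (teal)"]
    Q = np.array([
        [0.60, 0.35, 0.05],   # made-up numbers, chosen only to look plausible
        [0.30, 0.50, 0.20],
        [0.30, 0.45, 0.25],
        [0.35, 0.40, 0.25],
        [0.45, 0.25, 0.30],
    ])

    x = np.arange(len(labels))
    bottom = np.zeros(len(labels))
    for i, name in enumerate(components):
        plt.bar(x, Q[:, i], bottom=bottom, label=name)  # stack each component
        bottom += Q[:, i]
    plt.xticks(x, labels, rotation=45)
    plt.ylabel("ancestry fraction")
    plt.legend()
    plt.tight_layout()
    plt.show()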

Global hair texture map

Physically, of course, whites’ most distinctive feature is pale skin. They are also unique among human clades in their variety of hair and eye colors, ranging from dark to light, and tend to have wavy hair that is “oval” in cross-section. (Africans tend to have curly hair that is flat in cross-section, and Asians tend to have straight hair that is cylindrical in cross-section. See map for more hair details.)

There are other traits–the Wikipedia page on “Caucasian race” (not exactly synonymous with “whites”) notes:

According to George W. Gill and other modern forensic anthropologists, physical traits of Caucasoid crania are generally distinct from those of the Mongoloid and Negroid races. They assert that they can identify a Caucasoid skull with an accuracy of up to 95% by the following features: [20][21][22][23][24]

  • An orthognathic profile, with minimal protrusion of the lower part of the face (little or no prognathism).
  • Retreating zygomatic bones (cheekbones), making the face look more “pointed”.
  • Narrow nasal aperture, with a tear-shaped nasal cavity (nasal fossa).

Body hair map, according to the American Journal of Physical Anthropology and other sources

But I am not going to deal with any of these, unless I hear of something like the EDAR gene coding for a bunch of traits.

Old racial classifications made use of language groups as stand-ins for racial groups. This turns out to be not very reliable, as we’ve found that in many cases, a small group of conquerors has managed to impose its language without imposing its genetics, as you’ve discovered in real life if you’ve ever met an African or Indian who speaks English.

Light hair in Europe

The first known modern humans in Europe (i.e., not Neanderthals or Homo erectus), popularly known as Cro-Magnons and unpopularly known as European early modern humans (because anthropologists dislike sounding like commoners), lived around 43,000-45,000 years ago in Italy. By 41,000 years ago, Cro-Magnons had reached the southern coast of England.

Humanity’s path out of Africa

(Incidentally, Mungo Man, found in south-east Australia, is also estimated to be about 40,000 years old, suggesting that either:

A. People took a much longer route from Africa to Europe than to Australia
B. Europe was difficult to enter when folks left Africa, possibly because of glaciers or Neanderthals
C. There were multiple Out-of-Africa events, or
D. Our knowledge is incomplete.

D is obviously true, and I favor C regardless of Mungo’s true age.)

source: Wikipedia

These Cro-Magnons appear to have been brown skinned, brown eyed, and black haired–they likely looked more like their close relatives in the Middle East (whatever they looked like,) than their distant descendants in modern Europe. (Despite all of the mixing and conquering of the centuries, I think modern Europeans are partially descended from Cro-Magnons, but I could be wrong.)

The Cro-Magnons carved the famous “Venus of Willendorf” (we don’t really know if the figurine was intended as a “goddess” or a fertility figure or just a portrait of a local lady or what, but it’s a nice name,) among many other similar figurines, some of them quite stylized.

Venus of Monruz
Venus of Willendorf
Venus of Brassempouy

Some people think the figurines look African, with cornrows or peppercorn hair and steatopygia. Others suggest the figurines are wearing hats or braids, and point out that not all of them are fat or have large butts.

 

 

So when did Europeans acquire their modern appearances? Here’s what I’ve found so far:

Wikipedia states:

Variations in the KITL gene have been positively associated with about 20% of melanin concentration differences between African and non-African populations. One of the alleles of the gene has an 80% occurrence rate in Eurasian populations.[52][53] The ASIP gene has a 75–80% variation rate among Eurasian populations compared to 20–25% in African populations.[54] Variations in the SLC24A5 gene account for 20–25% of the variation between dark and light skinned populations of Africa,[55] and appear to have arisen as recently as within the last 10,000 years.[56] The Ala111Thr or rs1426654 polymorphism in the coding region of the SLC24A5 gene reaches fixation in Europe, but is found across the globe, particularly among populations in Northern Africa, the Horn of Africa, West Asia, Central Asia and South Asia.[57][58][59]

Light eyes in Europe

The Guardian reports:

According to a team of researchers from Copenhagen University, a single mutation which arose as recently as 6-10,000 years ago was responsible for all the blue-eyed people alive on Earth today.

The team, whose research is published in the journal Human Genetics, identified a single mutation in a gene called OCA2, which arose by chance somewhere around the northwest coasts of the Black Sea in one single individual, about 8,000 years ago.

Haplogroups of Europe

Wikipedia again:

The hair color gene MC1R has at least seven variants in Europe giving the continent a wide range of hair and eye shades. Based on recent genetic research carried out at three Japanese universities, the date of the genetic mutation that resulted in blond hair in Europe has been isolated to about 11,000 years ago during the last ice age.[25]

A recent archaeological and genetic study published in 2014 found that seven “Scandinavian hunter-gatherers” from the 7,700-year-old Motala archaeological site in southern Sweden had both light-skin gene variants, SLC24A5 and SLC45A2; they also had a third gene, HERC2/OCA2, which causes blue eyes and also contributes to lighter skin and blond hair.[29]

Genetic research published in 2014, 2015 and 2016 found that the Yamnaya Proto-Indo-Europeans, who migrated to Europe in the early Bronze Age, were overwhelmingly dark-eyed (brown) and dark-haired, and had a skin colour that was moderately light, though somewhat darker than that of the average modern European.[49] Light pigmentation traits had already existed in pre-Indo-European Europeans (both farmers and hunter-gatherers), and long-standing philological attempts to correlate them with the arrival of Indo-Europeans from the steppes were misguided.[50]

According to genetic studies, the Yamnaya Proto-Indo-European migration to Europe led to the Corded Ware culture, in which Yamnaya Proto-Indo-Europeans mixed with “Scandinavian hunter-gatherer” women who carried the HERC2/OCA2 alleles, which cause the combination of blue eyes and blond hair.[51][52][53] Descendants of this “Corded Ware admixture” split from the Corded Ware culture in every direction, forming new branches of the Indo-European tree, notably the Proto-Greeks, Proto-Italo-Celts, Proto-Indo-Iranians and Proto-Anatolians.[54] The Proto-Indo-Iranians who split from the Corded Ware culture formed the Andronovo culture and are believed to have spread the HERC2/OCA2 alleles that cause blond hair to parts of West Asia, Central Asia and South Asia.[52]

Genetic analysis in 2014 also found that the Afanasevo culture, which flourished in the Altai Mountains, was genetically identical to the Yamnaya Proto-Indo-Europeans and did not carry the genetic alleles for blond hair or light eyes.[55][51][52] The Afanasevo culture was later replaced by a second wave of Indo-European invaders from the Andronovo culture, who were a product of the Corded Ware admixture that took place in Europe and carried the genetic alleles that cause blond hair and light eyes.[55][51][52]

Dienekes writes:

An interesting finding [in Ancient human genomes suggest three ancestral populations for present-day Europeans] is that the Luxembourg hunter-gatherer probably had blue eyes (like a Mesolithic La Brana Iberian, a paper on which seems to be in the works) but darker skin than the LBK farmer who had brown eyes but lighter skin. Raghavan et al. did not find light pigmentation in Mal’ta (but that was a very old sample), so with the exception of light eyes that seem established for Western European hunter-gatherers (and may have been “darker” in European steppe populations, but “lighter” in Bronze Age South Siberians?), the origin of depigmentation of many recent Europeans remains a mystery.

Beleza et al, in The Timing of Pigmentation Lightening in Europeans, write:

… we estimate that the onset of the sweep shared by Europeans and East Asians at KITLG occurred approximately 30,000 years ago, after the out-of-Africa migration, whereas the selective sweeps for the European-specific alleles at TYRP1, SLC24A5, and SLC45A2 started much later, within the last 11,000–19,000 years, well after the first migrations of modern humans into Europe.

And finally from Wikipedia:

In a 2015 study based on 230 ancient DNA samples, researchers traced the origins of several genetic adaptations found in Europe.[46] The original mesolithic hunter-gatherers were dark skinned and blue eyed.[46] The HERC2 and OCA2 variations for blue eyes are derived from the original mesolithic hunter-gatherers, and the genes were not found in the Yamna people.[46] The HERC2 variation for blue eyes first appears around 13,000 to 14,000 years ago in Italy and the Caucasus.[38]

The migration of the neolithic farmers into Europe brought along several new adaptations.[46] The variation for light skin color was introduced to Europe by the neolithic farmers.[46] After the arrival of the neolithic farmers, a SLC22A4 mutation was selected for, a mutation which probably arose to deal with ergothioneine deficiency but increases the risk of ulcerative colitis, celiac disease, and irritable bowel disease.

The genetic variations for lactose persistence and greater height came with the Yamna people.[46]

To sum:

Skin: 10,000 years, 11-19,000 years, possibly arriving after blue eyes

Blond hair: 11,000 years

Blue eyes: 6-10,000 years ago, 13,000 to 14,000 years ago

It looks like some of these traits emerged in different populations and later combined as they spread, but they all look like they arose at approximately the same time.
Obviously I have neglected red and brown hair, green and hazel eyes, but the genetics all seem to be related.

Why Geneticists get touchy about Epigenetics

Disclaimer: I am not a geneticist. For those of you who are new here, this is basically a genetics fan blog. I am trying to learn about genetics, and you know what?

Genetics is complicated.

I fully admit that there’s a lot of stuff that I don’t know yet, nor fully understand.

Luckily for me, there are a few genetics basics that are easy enough to understand that even a middle school student can master them:

  1. “Evolution” is the theory that species change over time due to some individuals within them being better at getting food, reproducing, etc., than other individuals, and thereby passing on their superior traits to their children.
  2. “Genes,” (or “DNA,”) are the biological code for all life, and the physical mechanism by which traits are passed down from parent to child.
  3. “Mendel squares” work for modeling the inheritance of simple traits (a minimal sketch follows this list).
  4. More complicated traits are modeled with more complicated math.
  5. Lamarckism doesn’t work.
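Here is the sketch promised in point 3: a tiny Punnett square in Python for a single-locus cross. The allele letters are placeholders–“A” is an arbitrary dominant allele and “a” its recessive partner.

    from itertools import product

    def punnett(parent1, parent2):
        """Count offspring genotypes for a one-locus cross, e.g. 'Aa' x 'Aa'."""
        counts = {}
        for allele1, allele2 in product(parent1, parent2):
            genotype = "".join(sorted(allele1 + allele2))
            counts[genotype] = counts.get(genotype, 0) + 1
        return counts

    print(punnett("Aa", "Aa"))   # {'AA': 1, 'Aa': 2, 'aa': 1}

An Aa x Aa cross returns the familiar 1:2:1 genotype ratio, i.e. three dominant-phenotype offspring for every recessive one.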

Lamarck was a naturalist who, in the days before genes were discovered, theorized that creatures could pass on “acquired” characteristics. For example, an animal with a relatively normal neck in an area with tall trees might stretch its neck in order to reach the tastiest leaves, and then pass on this longer neck to its children, who would also stretch their necks and then pass on the trait to their children, until you get giraffes.

A fellow with similar ideas, Lysenko, was a Soviet scientist who thought he could make strains of cold-tolerant wheat simply by exposing wheat kernels to the cold.

We have the luxury of thinking that Lysenko’s ideas sound silly. The Soviet peasants had to actually try to grow his wheat, and scientists who pointed out that this was nonsense got sent to the gulag.

The problem with Lamarckism is that it doesn’t work. You can’t make wheat grow in Antarctica by sticking it in your freezer for a few months, and animals don’t have taller babies just because you stretch their necks.

So what does this have to do with epigenetics?

Pop science articles talk about epigenetics as if it were Lamarckism. Through the magic of epigenetic markers, acquired traits can supposedly be passed down to one’s children and grandchildren, infinitely.

Actual epigenetics, as scientists actually study it, is a real and interesting field. But the effects of epigenetic changes are not large or permanent enough to substantially change how we model genetic inheritance.

Why?

Epigenetics is, in essence, part of how you learn. Suppose you play a disturbing noise every time a mouse smells cherries. Pretty soon, the mouse would learn to associate “fear” and “cherry smell,” and according to Wikipedia, this gets encoded at the epigenetic level. Great, the mouse has learned to be afraid of cherries.

If these epigenetic traits get passed on to the mouse’s children–I am not convinced this is possible but let’s assume it is–then those children can inherit their mother’s fear of cherries.

This is pretty neat, but people take it too far when they assume that as a result, the mouse’s fear will persist over many generations, and that you have essentially just bred a new, cherry-fearing strain of mice.

You see, you learn new things all the time. So do mice. Your epigenetics therefore keep changing throughout your life. The older you are, the more your epigenetics have changed since you were born. This is why even identical twins differ in small ways from each other. Sooner or later, the young mice will figure out that there isn’t actually any reason to be afraid of cherries, and they’ll stop being afraid.

If people were actually the multi-generational heirs of their ancestors’ trauma, pretty much everyone in the world would be affected, because we all have at least one ancestor who endured some kind of horrors in their life. The entire continent of Europe should be a PTSD basket case due to WWI, WWII, and the Depression.

Thankfully, this is not what we see.

Epigenetics has some real and very interesting effects, but it’s not Lamarckism 2.0.

The 6 Civilizations?


The first six civilizations–Mesopotamia, Egypt, Indus Valley (Harappa), Andes, China, and Mesoamerica– are supposed to have arisen independently of each other approximately 6,000 to 3,500 years ago.


Of course, we can’t be absolutely sure they arose completely independently of each other–people from the Andes could have traveled to Mesoamerica and influenced people there, or people from Mesopotamia could have been in contact with people from the Indus Valley or Egypt. But these civilizations are thought to have probably arisen fairly independently of each other, as mostly spontaneous responses to local conditions.

I set out to research the big six because I realized that I know approximately nothing about the Indus Valley civilization, despite it actually being significantly older than the Chinese–for that matter, it turns out that Andean civilization is also older than China’s.

Wikipedia has an interesting definition of “civilization”:

Civilizations are intimately associated with and often further defined by other socio-politico-economic characteristics, including centralization, the domestication of both humans and other organisms, specialization of labor, culturally ingrained ideologies of progress and supremacism, monumental architecture, taxation, societal dependence upon farming as an agricultural practice, and expansionism.[2][3][5][7][8]

Read that carefully.

Map of early humans and domestication

(Sorry this map is too small to be really useful, but the next one is better:)

Map of domestication origins

Interestingly, while Mesoamerica has corn and the Andes have beans, potatoes and peanuts, Egypt and Mesopotamia have… not a lot of locally domesticated crops.

It’s understandable how Chinese civilization, which got started much later, might have originally imported rice from further south. But if Egypt and Mesopotamia are the world’s first centers of agriculture, where did they get their wheat from?

Anyway, I have been reading about Gobekli Tepe, an archaeological site in the Southeastern Anatolia Region of modern-day Turkey, about 7 miles from Şanlıurfa, which radiocarbon dating suggests was constructed by 11,000 years ago:

Göbekli Tepe, Turkey

[The site] includes two phases of ritual use dating back to the 10th – 8th millennium BCE. During the first phase, pre-pottery Neolithic A (PPNA), circles of massive T-shaped stone pillars were erected. More than 200 pillars in about 20 circles are currently known through geophysical surveys. Each pillar has a height of up to 6 m (20 ft) and a weight of up to 20 tons. They are fitted into sockets that were hewn out of the bedrock. …

All statements about the site must be considered preliminary, as less than 5% of the site has been excavated, … While the site formally belongs to the earliest Neolithic (PPNA), up to now no traces of domesticated plants or animals have been found. The inhabitants are assumed to have been hunters and gatherers who nevertheless lived in villages for at least part of the year.[27] …

The surviving structures, then, not only predate pottery, metallurgy, and the invention of writing or the wheel, they were built before the so-called Neolithic Revolution, i.e., the beginning of agriculture and animal husbandry around 9000 BCE.

Hewing enormous monoliths out of the rock and then hauling them uphill to form some sort of mysterious structure that doesn’t even appear to be a house takes a tremendous amount of work:

But the construction of Göbekli Tepe implies organization of an advanced order not hitherto associated with Paleolithic, PPNA, or PPNB societies. Archaeologists estimate that up to 500 persons were required to extract the heavy pillars from local quarries and move them 100–500 meters (330–1,640 ft) to the site.[28] The pillars weigh 10–20 metric tons (10–20 long tons; 11–22 short tons), with one still in the quarry weighing 50 tons.[29] It has been suggested that an elite class of religious leaders supervised the work and later controlled whatever ceremonies took place. If so, this would be the oldest known evidence for a priestly caste—much earlier than such social distinctions developed elsewhere in the Near East.[7]

Eastern Turkey (modern Kurdistan): the first civilization?

There are several other sites in the area, though not as old as Gobekli Tepe, such as Nevalı Çori.

So where did domesticated wheat come from? Einkorn wheat’s closest wild relatives have been found in Karaca Dag, Turkey, about 20 miles away. Wild emmer wheat appears to be a hybrid between a wild einkorn variety and a not-quite-identified species and grows from Israel to Iran, though our first evidence of domestication comes from Israel and Syria. (Of course, we may have excavated more archaeological sites in Israel than, say, Iraq or Turkey, for obvious recent geopolitical and religious reasons.)

Regardless, we know that these first Anatolian farmers made a huge impact on the European genetic landscape:

From Haak et al, rearranged by me

The guys on the left, the ones with “blue” DNA, are European hunter-gatherers who occupied the continent before farmers arrived. The guys in the middle, “orange,” are farmers. The farmers appear to have arrived initially in Europe around Starcevo (in the Balkans) and spread out from there, eventually conquering, overwhelming, or otherwise displacing the hunter-gatherers. (The teal-blue group is “Indo-Europeans” who lived out on the Asian steppe and so did not get conquered by farmers.) From Europedia.com:

Maps: European hunter-gatherer admixture and Neolithic farmer admixture

Of course, people have been referring to the region from the mouths of the Tigris and Euphrates to the Nile valley as the “Fertile Crescent” for a hundred years, though the major differences between Egyptian and Sumerian civilization make it sensible to speak of them separately. But it looks to me like they may both owe their origins (at least their crops) to some highly-organized Turkish hunter-gatherers.

Southpaw Genetics

Warning: Totally speculative

This is an attempt at a coherent explanation for why left-handedness (and right-handedness) exist in the distributions that they do.

Handedness is a rather exceptional human trait. Most animals don’t have a dominant hand (or foot.) Horses have no dominant hooves; anteaters dig equally well with both paws; dolphins don’t favor one flipper over the other; monkeys don’t fall out of trees if they try to grab a branch with their left hands. Only humans have a really distinct tendency to use one side of their bodies over the other.

And about 90% of us use our right hands and about 10% use our left hands (Wikipedia claims 10%, but The Lopsided Ape reports 12%), an observation that appears to hold pretty consistently throughout both time and culture, so long as we aren’t dealing with a culture where lefties are forced to write with their right hands.

A simple Mendel-square explanation for handedness–a single gene with a dominant allele for right-handedness and a recessive one for left-handedness, with equal proportions of the two alleles in society–would result in 75% righties and 25% lefties. Even if the proportions weren’t equal, the offspring of two lefties ought to be 100% left-handed. This is not, however, what we see. The children of two lefties have only a 25% chance or so of being left-handed themselves.
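Spelling that simple model out in a few lines of Python (using the equal allele proportions assumed above):

    # Simple recessive-lefty model (the one being rejected here): the
    # right-handed allele R is dominant, the left-handed allele l is
    # recessive, and both have frequency 0.5 in the population.
    p_l = 0.5
    lefties = p_l ** 2              # ll genotypes: 25% of the population
    righties = 1 - lefties          # RR and Rl genotypes: 75%
    print(righties, lefties)        # 0.75 0.25

    # Two left-handed parents are both ll, so every child inherits l from
    # each parent: this model predicts 100% left-handed children,
    # which is not what surveys show.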

So let’s try a more complicated model.

Let’s assume that there are two alleles that code for right-handedness. (Hereafter “R”) You get one from your mom and one from your dad.

Each of these alleles is accompanied by a second allele that codes for either nothing (hereafter “O”) or potentially switches the expression of your handedness (hereafter “S”)

Everybody in the world gets two identical R alleles, one from mom and one from dad.

Everyone also gets two S or O alleles, one from mom and one from dad. One of these S or O alleles affects one of your Rs, and the other affects the other R.

Your potential pairs, then, are:

RO/RO, RO/RS, RS/RO, or RS/RS

RO=right handed allele.

RS=50% chance of expressing for right or left dominance; RS/RS thus => 25% chance of both alleles coming out lefty.

So RO/RO, RO/RS, and RS/RO = righties, (but the RO/ROs may have especially dominant right hands; half of the RO/RS guys may have weakly dominant right hands.)

Only RS/RS produces lefties, and of those, only 25% defeat the dominance odds.

This gets us our observed correlation of only 25% of children of left-handed couples being left-handed themselves.

(Please note that this is still a very simplified model; Wikipedia claims that there may be more than 40 alleles involved.)

What of the general population as a whole?

Assuming random mating in a population with equal quantities of RO/RO, RO/RS, RS/RO and RS/RS, we’d end up with 25% of children RS/RS. But if only 25% of RS/RS turn out lefties, only 6.25% of children would be lefties. We’re still missing 4-6% of the population.

This implies that either: A. Wikipedia has the wrong #s for % of children of lefties who are left-handed; B. about half of lefties are RO/RS (about 1/8th of the RO/RS population); C. RS is found in twice the proportion as RO in the population; or D. my model is wrong.

According to Anything Left-Handed:

Dr Chris McManus reported in his book Right Hand, Left Hand on a study he had done based on a review of scientific literature which showed parent handedness for 70,000 children. On average, the chances of two right-handed parents having a left-handed child were around 9% left-handed children, two left-handed parents around 26% and one left and one right-handed parent around 19%. …
More than 50% of left-handers do not know of any other left-hander anywhere in their living family.

This implies B, that about half of lefties are RO/RS. Having one RS combination gives you a 12.5% chance of being left-handed; having two RS combinations gives you a 25% chance.

And that… I think that works. And it means we can refine our theory–we don’t need two R alleles; we only need one. (Obviously it is more likely a whole bunch of alleles that code for a whole system, but since they act together, we can model them as one.) The R allele is then modified by a pair of alleles that comes in either O (do nothing,) or S (switch.)

One S allele gives you a 12.5% chance of being a lefty; two doubles your chances to 25%.
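For anyone who would rather check the arithmetic than take it on faith, here is a quick Monte Carlo sketch of this model in Python. It hard-codes the 12.5%/25% switch chances above and equal S/O allele frequencies; the rates it prints are simply whatever the model implies, so comparing them with the Wikipedia and McManus figures quoted earlier is a fair stress test:

    import random

    # Monte Carlo sketch of the refined model: a single O/S "switch" pair,
    # S and O at equal frequency, with P(lefty) = 0%, 12.5%, or 25% for
    # zero, one, or two S alleles. Nothing here is established biology;
    # it just mechanically follows the model described above.
    P_LEFTY = {0: 0.0, 1: 0.125, 2: 0.25}

    def make_person():
        s_count = sum(random.random() < 0.5 for _ in range(2))  # S alleles inherited
        return s_count, random.random() < P_LEFTY[s_count]

    def make_child(parent1, parent2):
        # each parent passes on S with probability (their S count) / 2
        s_count = (random.random() < parent1[0] / 2) + (random.random() < parent2[0] / 2)
        return s_count, random.random() < P_LEFTY[s_count]

    population = [make_person() for _ in range(200_000)]
    lefties = [p for p in population if p[1]]
    print("lefty rate in the population:", len(lefties) / len(population))

    children = [make_child(random.choice(lefties), random.choice(lefties))
                for _ in range(100_000)]
    print("lefty rate among children of two lefties:",
          sum(c[1] for c in children) / len(children))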

Interestingly, this model suggests that not only does no gene for “left handedness” exist, but that “left handedness” might not even be the allele’s goal. Despite the rarity of lefties, the S allele is carried by 75% of the population (the same share as carries the O allele). My suspicion is that the S allele is doing something else valuable, like making sure we don’t become too lopsided in our abilities or try to shunt all of our mental functions to one side of our brain.

Is there a correlation between intelligence and taste?

(I am annoyed by the lack of bands between 1200 and 1350)
(source)

De gustibus non disputandum est. (“There is no disputing about taste.”) — Confucius

We’re talking about foods, not whether you prefer Beethoven or Lil’ Wayne.

Certainly there are broad correlations between the foods people enjoy and their ethnicity/social class. If you know whether I chose fried okra, chicken feet, gefilte fish, escargot, or grasshoppers for dinner, you can make a pretty good guess about my background. (Actually, I have eaten all of these things. The grasshoppers were over-salted, but otherwise fine.) The world’s plethora of tasty (and not-so-tasty) cuisines is due primarily to regional variations in what grows well where (not a lot of chili peppers growing up in Nunavut, Canada,) and cost (the rich can always afford fancier fare than the poor,) with a side dish of seemingly random cultural taboos like “don’t eat pork” or “don’t eat cows” or “don’t eat grasshoppers.”

But do people vary in their experience of taste? Does intelligence influence how you perceive your meal, driving smarter (or less-smart) people to seek out particular flavor profiles or combinations? Or could there be other psychological or neurological factors at play in people’s eating decisions?

This post was inspired by a meal my husband, an older relative, and I shared recently at McDonald’s. It had been a while since we’d last patronized McDonald’s, but the older relative likes their burgers, so we went and ordered some new-to-us variety of meat-on-a-bun. As my husband and I sat there, deconstructing the novel taste experience and comparing it to other burgers, the older relative gave us this look of “Jeez, the idiots are discussing the flavor of a burger! Just eat it already!”

As we dined later that evening at my nemesis, Olive Garden, I began wondering whether we actually experienced the food the same way. Perhaps there is something in people that makes them prefer bland, predictable food. Perhaps some people are better at discerning different flavors, and the people who cannot discern them end up with worse food because they can’t tell?

Unfortunately, it appears that not a lot of people have studied whether there is any sort of correlation between IQ and taste (or smell.) There’s a fair amount of research on taste (and smell,) like “do relatives of schizophrenics have impaired senses of smell?” (More on Schizophrenics and their decreased ability to smell) or “can we get fat kids to eat more vegetables?” Oh, and apparently the nature of auditory hallucinations in epileptics varies with IQ (IIRC.) But not much that directly addresses the question.

I did find two references that, somewhat in passing, noted that they found no relationship between taste and IQ, but these weren’t studies designed to test for that. For example, in A Food Study of Monotony, published in 1958 (you know I am really looking for sources when I have to go back to 1958,) researchers restricted the diets of military personnel employed at an army hospital to only 4 menus to see how quickly and badly they’d get bored of the food. They found no correlation between boredom and IQ, but people employed at an army hospital are probably pre-selected for being pretty bright (and having certain personality traits in common, including ability to stand army food.)

Interestingly, three traits did correlate with (or against) boredom:

Fatter people got bored fastest (the authors speculate that they care the most about their food,) while depressed and feminine men (all subjects in the study were men) got bored the least. Depressed people are already uninterested in food, so it is hard to get less interested, but no explanation was given of what they meant by “femininity” or how this might affect food preferences. (Also, the hypochondriacs got bored quickly.)

Some foods inspire boredom (or even disgust) quickly, while others are virtually immune. Milk and bread, for example, can be eaten every day without complaint (though you might get bored if bread were your only food.) Potted meat, by contrast, gets old fast.

Likewise, Personality Traits and Eating Habits (warning PDF) notes that:

Although self-reported eating practices were not associated with educational level, intelligence, nor various indices of psychopathology, they were related to the demographic variables of gender and age: older participants reported eating more fiber in their diets than did younger ones, and women reported more avoidance of fats from meats than did men.

Self-reported eating habits may not be all that reliable, though.

Autistic children do seem to be worse at distinguishing flavors (and smells) than non-autistic children, eg Olfaction and Taste Processing in Autism:

Participants with autism were significantly less accurate than control participants in identifying sour tastes and marginally less accurate for bitter tastes, but they were not different in identifying sweet and salty stimuli. … Olfactory identification was significantly worse among participants with autism. … True differences exist in taste and olfactory identification in autism. Impairment in taste identification with normal detection thresholds suggests cortical, rather than brainstem dysfunction.

(Another study of the eating habits of autistic kids found that the pickier ones were rated by their parents as more severely impaired than the less picky ones, but then severe food aversions are a form of life impairment. By the way, do not tell the parents of an autistic kid, “oh, he’ll eat when he’s hungry.” They will probably respond politely, but mentally they are stabbing you.)

On brainstem vs. cortical function–it appears that we do some of our basic flavor identification way down in the most instinctual part of the brain, as Facial Expressions in Response to Taste and Smell Stimulation explores. The authors found that pretty much everyone makes the same faces in response to sweet, sour, and bitter flavors–whites and blacks, old people and newborns, retarded people and blind people, even premature infants, blind infants, and infants born missing most of their brains. All of which is another point in favor of my theory that disgust is real. (And if that is not enough science of taste for you, I recommend Place and Taste Aversion Learning, in which animals with brain lesions lost their fear of new foods.)

Genetics obviously plays a role in taste. If you are one of the 14% or so of people who think cilantro tastes like soap (and I sympathize, because cilantro definitely tastes like soap,) then you’ve already discovered this in a very practical way. Genetics also obviously determine whether you continue producing the enzyme for milk digestion after infancy (lactase persistence). According to Why are you a picky eater? Blame genes, brains, and breastmilk:

In many cases, mom and dad have only themselves to blame for unwittingly passing on the genes that can govern finicky tastes. Studies show that genes play a major role in determining who becomes a picky eater, including recent research on a group of 4- to 7-year-old twins. Part of the pickiness can be attributed to specific genes that govern taste. Variants of the TAS2R38 gene, for example, have been found to encode for taste receptors that determine how strongly someone tastes bitter flavors.

Researchers at Philadelphia’s Monell Chemical Senses Center, a scientific institute dedicated to the study of smell and taste, have found that this same gene also predicts the strength of sweet-tooth cravings among children. Kids who were more sensitive to bitterness preferred sugary foods and drinks. However, adults with the bitter receptor genes remained picky about bitter foods but did not prefer more sweets, the Monell study found. This suggests that sometimes age and experience can override genetics.

I suspect that there is actually a sound biological, evolutionary reason why kids crave sweets more than grownups, and this desire for sweets is somewhat “turned off” as we age.


From a review of Why some like it hot: Food, Genetics, and Cultural Diversity:

Ethnobotanist Gary Paul Nabhan suggests that diet had a key role in human evolution, specifically, that human genetic diversity is predominately a product of regional differences in ancestral diets. Chemical compounds found within animals and plants varied depending on climate. These compounds induced changes in gene expression, which can vary depending on the amount within the particular food and its availability. The Agricultural Age led to further diet-based genetic diversity. Cultivation of foods led to the development of novel plants and animals that were not available in the ancestral environment. …

There are other fascinating examples of gene-diet interaction. Culturally specific recipes, semi-quantitative blending of locally available foods and herbs, and cooking directions needed in order to reduce toxins present in plants, emerged over time through a process of trial-and error and were transmitted through the ages. The effects on genes by foods can be extremely complex given the range of plant-derived compounds available within a given region. The advent of agriculture is suggested to have overridden natural selection by random changes in the environment. The results of human-driven selection can be highly unexpected. …

In sedentary herding societies, drinking water was frequently contaminated by livestock waste. The author suggests in order to avoid contaminated water, beverages made with fermented grains or fruit were drunk instead. Thus, alcohol resistance was selected for in populations that herded animals, such as Europeans. By contrast, those groups which did not practice herding, such as East Asians and Native Americans, did not need to utilize alcohol as a water substitute and are highly sensitive to the effects of alcohol.

Speaking of genetics:

(source?)
From Eating Green could be in your Genes

Indians and Africans are much more likely than Europeans and native South Americans to have an allele that lets them eat a vegetarian diet:

The vegetarian allele evolved in populations that have eaten a plant-based diet over hundreds of generations. The adaptation allows these people to efficiently process omega-3 and omega-6 fatty acids and convert them into compounds essential for early brain development and controlling inflammation. In populations that live on plant-based diets, this genetic variation provided an advantage and was positively selected in those groups.

In Inuit populations of Greenland, the researchers uncovered that a previously identified adaptation is opposite to the one found in long-standing vegetarian populations: While the vegetarian allele has an insertion of 22 bases (a base is a building block of DNA) within the gene, this insertion was found to be deleted in the seafood allele.

Of course, this sort of thing inspires a wealth of pop-psych investigations like Dr. Hirsch’s What Flavor is your Personality?  (from a review:

Dr. Hirsh, neurological director of the Smell and Taste Research and Treatment Foundation in Chicago, stands by his book that is based on over 24 years of scientific study and tests on more than 18,000 people’s food choices and personalities.)

that nonetheless may have some basis in fact, eg: Personality may predict if you like spicy foods:

Byrnes assessed the group using the Arnett Inventory of Sensation Seeking (AISS), a test for the personality trait of sensation-seeking, defined as desiring novel and intense stimulation and presumed to contribute to risk preferences. Those in the group who score above the mean AISS score are considered more open to risks and new experiences, while those scoring below the mean are considered less open to those things.

The subjects were given 25 micrometers of capsaicin, the active component of chili peppers, and asked to rate how much they liked a spicy meal as the burn from the capsaicin increased in intensity. Those in the group who fell below the mean AISS rapidly disliked the meal as the burn increased. People who were above the mean AISS had a consistently high liking of the meal even as the burn increased. Those in the mean group liked the meal less as the burn increased, but not nearly as rapidly as those below the mean.

And then there are the roughly 25% of us who are “supertasters“:

A supertaster is a person who experiences the sense of taste with far greater intensity than average. Women are more likely to be supertasters, as are those from Asia, South America and Africa.[1] The cause of this heightened response is unknown, although it is thought to be related to the presence of the TAS2R38 gene, the ability to taste PROP and PTC, and at least in part, due to an increased number of fungiform papillae.[2]

Perhaps the global distribution of supertasters is related to the distribution of vegetarian-friendly alleles. It’s not surprising that women are more likely to be supertasters, as they have a better sense of smell than men. What may be surprising is that supertasters tend not to be foodies who delight in flavoring their foods with all sorts of new spices, but instead tend toward more restricted, bland diets. Because their sense of taste is essentially on overdrive, flavors that taste “mild” to most people taste “overwhelming” on their tongues. As a result, they tend to prefer a much more subdued palette–which is, of course, perfectly tasty to them.

A French study, Changes in Food Preferences and Food Neophobia during a Weight Reduction Session, measured kids’ ability to taste flavors, then the rate at which they became accustomed to new foods. The more sensitive the kids were to flavors, the less likely they were to adopt a new food; the less adept they were at tasting flavors, the more likely they were to start eating vegetables.

Speaking of pickiness again:

“During research back in the 1980s, we discovered that people are more reluctant to try new foods of animal origin than those of plant origin,” Pelchat says. “That’s ironic in two ways. As far as taste is concerned, the range of flavors in animal meat isn’t that large compared to plants, so there isn’t as much of a difference. And, of course, people are much more likely to be poisoned by eating plants than by animals, as long as the meat is properly cooked.” …

It’s also possible that reward mechanisms in our brain can drive changes in taste. Pelchat’s team once had test subjects sample tiny bits of unfamiliar food with no substantial nutritional value, and accompanied them with pills that contained either nothing or a potent cocktail of caloric sugar and fat. Subjects had no idea what was in the pills they swallowed. They learned to like the unfamiliar flavors more quickly when they were paired with a big caloric impact—suggesting that body and brain combined can alter tastes more easily when unappetizing foods deliver big benefits.

So trying to get people to adopt new foods while losing weight may not be the best idea.

(For all that people complain about kids’ pickiness, parents are much pickier. Kids will happily eat playdoh and crayons, but one stray chicken heart in your parents’ soup and suddenly it’s “no more eating at your house.”)

Of course, you can’t talk about food without encountering meddlers who are convinced that people should eat whatever they have decided is the perfect diet, like these probably well-meaning folks trying to get Latinos to eat fewer snacks:

Latinos are the largest racial and ethnic minority group in the United States and bear a disproportionate burden of obesity related chronic disease. Despite national efforts to improve dietary habits and prevent obesity among Latinos, obesity rates remain high. …

there is a need for more targeted health promotion and nutrition education efforts on the risks associated with soda and energy-dense food consumption to help improve dietary habits and obesity levels in low-income Latino communities.

Never mind that Latinos are one of the healthiest groups in the country, with longer life expectancies than whites! We’d better make sure they know that their food ways are not approved of!

I have been saving this graph for just such an occasion.
Only now I feel bad because I forgot to write down who made this graph so I can properly credit them. If you know, please tell me!

(Just in case it is not clear already: different people are adapted to and will be healthy on different diets. There is no magical, one-size-fits-all diet.)

And finally, to bring this full circle, it’s hard to miss the folks claiming that Kids Who Eat Fast Food Have Lower IQs:

4,000 Scottish children aged 3-5 years old were examined to compare the intelligence dampening effects of fast food consumption versus  “from scratch”  fare prepared with only fresh ingredients.

Higher fast food consumption by the children was linked with lower intelligence and this was even after adjustments for wealth and social status were taken into account.

It’d be better if they controlled for parental IQ.

The conclusions of this study confirm previous research which shows long lasting effects on IQ from a child’s diet. An Australian study from the University of Adelaide published in August 2012 showed that toddlers who consume junk food grow less smart as they get older. In that study, 7000 children were examined at the age of 6 months, 15 months, 2 years to examine their diet.

When the children were examined again at age 8, children who were consuming the most unhealthy food had IQs up to 2 points lower than children eating a wholesome diet.

 

 

The hominin braid

Much has been said ’round the HBD-osphere, lately, on the age of the Pygmy (and Bushmen?)/everyone else split. Greg Cochran of West Hunter, for example, supports a split around 300,000 years ago–100,000 years before the supposed emergence of “anatomically modern humans” aka AMH aka Homo sapiens sapiens:

A number of varieties of Homo are grouped into the broad category of archaic humans in the period beginning 500,000 years ago (or 500ka). It typically includes Homo neanderthalensis (40ka-300ka), Homo rhodesiensis (125ka-300ka), Homo heidelbergensis (200ka-600ka), and may also include Homo antecessor (800ka-1200ka).[1] This category is contrasted with anatomically modern humans, which include Homo sapiens sapiens and Homo sapiens idaltu. (source)

According to genetic and fossil evidence, archaic Homo sapiens evolved to anatomically modern humans solely in Africa, between 200,000 and 100,000 years ago, with members of one branch leaving Africa by 60,000 years ago and over time replacing earlier human populations such as Neanderthals and Homo erectus. (source)

The last steps taken by the anatomically modern humans before becoming the current Homo sapiens, known as “behaviourally modern humans“, were taken either abruptly circa 40-50,000 years ago,[11] or gradually, and led to the achievement of a suite of behavioral and cognitive traits that distinguishes us from merely anatomically modern humans, hominins, and other primates. (source)

Cochran argues:

They’ve managed to sequence a bit of autosomal DNA from the Atapuerca skeletons, about 430,000 years old, confirming that they are on the Neanderthal branch.

Among other things, this supports the slow mutation rate, one compatible with what we see in modern family trios, but also with the fossil record.

This means that the Pygmies, and probably the Bushmen also, split off from the rest of the human race about 300,000 years ago. Call them Paleoafricans.

Personally, I don’t think the Pygmies are that old. Why? Call it intuition; it just seems more likely that they aren’t. Of course, there are a lot of guys out there whose intuition told them those rocks couldn’t possibly be more than 6,000 years old; I recognize that intuition isn’t always a great guide. It’s just the one I’ve got.

(Actually, my intuition is based partially on my potentially flawed understanding of Haak’s graph, which I read as indicating that Pygmies split off quite recently.)

The thing about speciation (especially of extinct species we know only from their bones) is that it is not really as exact as we’d like it to be. A lot of people think the standard is “can these animals interbreed?” but dogs, coyotes, and wolves can all interbreed. Humans and Neanderthals interbred; the African forest elephant and African bush elephant were long thought to be the same species because they interbreed in zoos, but have been re-categorized into separate species because in the wild, their ranges don’t overlap and so they wouldn’t interbreed without humans moving them around. And now they’re telling us that Brontosaurus is a valid genus after all, but Pluto still isn’t a planet.

This is a tree

The distinction between archaic Homo sapiens and Homo sapiens sapiens is based partly on morphology (look at those brow ridges!) and partly on the urge to draw a line somewhere. If HSS could interbreed with Neanderthals, from whom they were separated by a good 500,000 years, there’s no doubt we moderns could interbreed with AHS from 200,000 years ago. (There’d be a fertility hit, just as pairings between disparate groups of modern HSS take fertility hits, but probably nothing too major–probably not as bad as an Rh- woman x Rh+ man, which we consider normal.)

bones sorted by time

So I don’t think Cochran is being unreasonable. It’s just not what my gut instinct tells me. I’ll be happy to admit I was wrong if I am.

The dominant model of human (and other) evolution has long been the tree (just as we model our own families.) Trees are easy to draw and easy to understand. The only drawback is that it’s not always clear exactly where a particular skull should be placed on our trees, or whether the skull we have is even representative of its species–the first Neanderthal bones we uncovered actually hailed from an individual who had suffered from arthritis, resulting in decades of misunderstanding of Neanderthal morphology. (Consider, for sympathy, the difficulties of an alien anthropologist if they were handed a modern Pygmy skeleton, 4’11”, and a Dinka skeleton, 5’11”, and asked to sort them by species.)

blob chart

What we really have are a bunch of bones, and we try to sort them out by time and place, and see if we can figure out which ones belong to separate species. We do our best given what we have, but it’d be easier if we had a few thousand more ancient hominin bones.

The fact that different “species” can interbreed complicates the tree model, because branches do not normally split off and then fuse with other branches, at least not on real trees. These days, it’s looking more like a lattice model–but this probably overstates the amount of crossing. Aboriginal Australians, for example, were almost completely isolated for about 40,000 years, with (IIRC) only one known instance of genetic introgression that happened about 11,000 years ago when some folks from India washed up on the northern shore. The Native Americans haven’t been as isolated, because there appear to have been multiple waves of people that crossed the Bering Strait or otherwise made it into the Americas, but we are still probably talking about only a handful of groups over the course of 40,000 years.

Trellis model

Still, the mixing is there; as our ability to suss out genetic differences becomes better, we’re likely to keep turning up new instances.

So what happens when we get deep into the 200,000-year origins of humanity? I suspect–though I could be completely wrong!–that things near the origins get murkier, not clearer. The tree model suggests that the original group of hominins at the base of the “human” tree would be less genetically diverse than the scattered spectrum of humanity we have today, but these folks may have had a great deal of genetic diversity among themselves due to having recently mated with other human species (many of which we haven’t even found, yet.) And those species themselves had crossed with other species. For example, we know that Melanesians have a decent chunk of Denisovan DNA (and almost no one outside of Melanesia has this, with a few exceptions,) and the Denisovans show evidence that they had even older DNA introgressed from a previous hominin species they had mated with. So you can imagine the many layers of introgression you could get with a part-Melanesian person with some Denisovan with some of this other DNA… As we look back in time toward our own origins, we may similarly see a great variety of very disparate DNA that has, in essence, hitch-hiked down the years from older species, but has nothing to do with the timing of the split of modern groups.

As always, I am speculating.

Do small families lead to higher IQ?

Okay, so this is just me thinking (and mathing) out loud. Suppose we have two different groups (A and B) of 100 people each (an arbitrary number chosen for ease of dividing.) In Group A, people are lumped into 5 large “clans” of 20 people each. In Group B, people are lumped into 20 small clans of 5 people each.

Each society has an average IQ of 100–ten people with 80IQs, ten people with 120IQs, and eighty people with 100IQs. I assume that there is slight but not absolute assortative mating, so that most high-IQ and low-IQ people end up marrying someone average.

IQ pairings:

100/100: 30 couples
100/80: 9 couples
100/120: 9 couples
80/80: 1 couple
120/120: 1 couple

Okay, so there should be thirty couples where both partners have 100IQs, nine 100/80IQ couples, nine 100/120IQ couples, one 80/80IQ couple, and one 120/120IQ couple.

If each couple has 2 kids, distributed thusly:

100/100=> 10% 80, 10% 120, and 80% 100

120/120=> 100% 120

80/80 => 100% 80

120/100=> 100% 110

80/100 => 100% 90

Then we’ll end up with eight 80IQ kids, eighteen 90IQ, forty-eight 100IQ, eighteen 110IQ, and eight 120IQ.

So, under pretty much perfect and totally arbitrary conditions that probably only vaguely approximate how genetics actually works (also, we are ignoring the influence of random chance on the grounds that it is random and therefore evens out over the long-term,) our population approaches a normal bell-curved IQ distribution.
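For anyone who wants to check the bookkeeping, here is that first generation of children as a few lines of Python, with the couple counts and offspring rules exactly as stated above:

    from collections import Counter

    # Couple counts from above; every couple has exactly two kids.
    couples = {(100, 100): 30, (100, 80): 9, (100, 120): 9, (80, 80): 1, (120, 120): 1}

    kids = Counter()
    for (a, b), n_couples in couples.items():
        n_kids = 2 * n_couples
        if (a, b) == (100, 100):
            kids[80] += 0.1 * n_kids     # 10% at 80
            kids[100] += 0.8 * n_kids    # 80% at 100
            kids[120] += 0.1 * n_kids    # 10% at 120
        else:
            kids[(a + b) // 2] += n_kids  # mixed and same-extreme couples breed to the midpoint

    print(dict(kids))   # 8 kids at 80, 18 at 90, 48 at 100, 18 at 110, 8 at 120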

Third gen:

80/80: 1 couple => 2 kids at 80
80/90: 2 couples => 4 kids at 85
80/100: 5 couples => 10 kids at 90
90/90: 4 couples => 8 kids at 90
90/100: 9 couples => 18 kids at 95
90/110: 2 couples => 4 kids at 100
100/100: 6 couples => 1 kid at 80, 4 at 100, 1 at 120
100/110: 9 couples => 18 kids at 105
100/120: 5 couples => 10 kids at 110
110/110: 4 couples => 8 kids at 110
110/120: 2 couples => 4 kids at 115
120/120: 1 couple => 2 kids at 120

3 80, 4 85, 18 90, 18 95, 8 100, 18 105, 18 110, 4 115, and 3 120. For simplicity’s sake:

7 80IQ, 18 90IQ, 44 100IQ, 18 110IQ, and 7 120IQ.

Not bad for a very, very rough model that is trying to keep the math very simple so I can write it in a blog post window instead of on paper, though clearly 6 children have gotten lost somewhere. (Rounding error? On closer inspection, the six 100/100 couples should have produced twelve kids, but only six got counted.)

Anyway, now let’s assume that we don’t have a 2-child policy in place, but that being smart (or dumb) does something to your reproductive chances.

In the simplest model, people with 80IQs have zero children, 90s have one child, 100s have 2 children, 110s have 3 children, and 120s have 4 children.

oh god but the couples are crossed so do I take the average or the top IQ? I guess I’ll take average.

Gen 2:

100/100: 30 couples => 60 kids (6 at 80, 48 at 100, 6 at 120)
100/80: 9 couples => 9 kids at 90
100/120: 9 couples => 27 kids at 110
80/80: 1 couple => 0 kids
120/120: 1 couple => 4 kids at 120

So our new distribution is six 80IQ, nine 90IQ, forty-eight 100IQ, twenty-seven 110IQ, and ten 120IQ.

(checks math oh good it adds up to 100.)

We’re not going to run gen three, as obviously the trend will continue.

Let’s go back to our original clans. Society A has 5 clans of 20 people each; Society B has 20 clans of 5 people each.

With 10 high-IQ and 10 low-IQ people per society, each clan in A is likely to have 2 smart and 2 dumb people. Each clan in B, by contrast, is likely to have only 1 smart or 1 dumb person. For our model, each clan will be the reproductive unit rather than each couple, and we’ll take the average IQ of each clan.

Society A: 5 clans with average of 100 IQ => social stasis.

Society B: 20 clans, 10 with an average of 96 and 10 with an average of 104. Not a big difference, but if the 104s have even just a few more children over the generations than the 96s, they will gradually increase as a % of the population.

Of course, over the generations, a few of our 5-person clans will get two smart people (average IQ 108), a dumb and a smart (average 100), and two dumb (92.) The 108 clans will do very well for themselves, and the 92 clans will do very badly.

Speculative conclusions:

If society functions so that smart people have more offspring than dumb people (definitely not a given in the real world,) then:

In society A, everyone benefits from the smart people, whose brains uplift their entire extended families (large clans.) This helps everyone, especially the least capable, who otherwise could not have provided for themselves. However, the average IQ in society A doesn’t move much, because you are likely to have equal numbers of dumb and smart people in each family, balancing each other out.

In Society B, the smart people are still helping their families, but since their families are smaller, random chance dictates that they are less likely to have a dumb person in their families. The families with the misfortune to have a dumb member suffer and have fewer children as a result; the families with the good fortune to have a smart member benefit and have more children as a result.

Society B has more suffering, but also evolves to have a higher average IQ. Society A has less suffering, but its IQ does not change.

Obviously this is a thought experiment and should not be taken as proof of anything about real world genetics. But my suspicion is that this is basically the mechanism behind the evolution of high IQ in areas with long histories of nuclear, atomized families, and the mechanism suppressing IQ in areas with strongly tribal norms. (See HBD Chick for everything family structure related.)
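For the morbidly curious, here is a toy Monte Carlo version of the clan argument in Python. Every modeling choice in it is arbitrary and mine–clans are re-formed at random each generation, a clan’s share of the next generation rises steeply with its mean IQ (a fourth-power weighting pulled out of thin air), and children simply copy the IQ of a random member of their parent clan–so treat it as an illustration of the argument, not evidence:

    import random
    import statistics

    # Toy Monte Carlo of the clan thought experiment. All modeling choices
    # here are arbitrary, not established population genetics.
    def simulate(clan_size, generations=30, replicates=50):
        finals = []
        for rep in range(replicates):
            rng = random.Random(rep)
            population = [80] * 10 + [120] * 10 + [100] * 80
            for _ in range(generations):
                rng.shuffle(population)
                clans = [population[i:i + clan_size]
                         for i in range(0, len(population), clan_size)]
                # a clan's share of the next generation rises with its mean IQ
                weights = [statistics.mean(c) ** 4 for c in clans]
                # children copy the IQ of a random member of their parent clan
                population = [rng.choice(rng.choices(clans, weights)[0])
                              for _ in range(100)]
            finals.append(statistics.mean(population))
        return statistics.mean(finals)

    print("Society A, clans of 20:", simulate(20))
    print("Society B, clans of 5: ", simulate(5))

If the argument above holds, Society B’s final average should tend to come out higher than Society A’s; by how much depends entirely on the arbitrary knobs, which is why this is an illustration rather than evidence.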

 

 

Updated Tentative map of Neanderthal DNA


Based on my previous tentative map of archaic DNA, plus recent findings, eg Cousins of Neanderthals left DNA in Africa, Scientists Report. As usual, let me emphasize that this is VERY TENTATIVE.

Basically: Everyone outside of Africa has some Neanderthal DNA. It looks like the ancestors of the Melanesians interbred once with Neanderthals; the ancestors of Europeans interbred twice; the ancestors of Asians interbred three times.

Small amounts of Neanderthal DNA also show up in Africa, probably due to back-migration of people from Eurasia.

Denisovan DNA shows up mainly in Melanesians, but I think there is also a very small amount that shows up in south east Asia, some (or something similar) in Tibetans, and possibly a small amount in the Brazilian rainforest.

Now some kind of other archaic DNA has been detected in the Hazda, Sandawe, and Pygmies of Africa.

Native Americans and Neanderthal DNA

Since “Do Native Americans have Neanderthal DNA?” (or something similar) is the most popular search that leads people to my blog, I have begun to suspect that a clarification is in order.

Native Americans (Indians) are not Neanderthals. They are not half or quarter or otherwise significantly Neanderthal. If they were, they would have very noticeable fertility problems in mixed-race relationships.

They may have slightly more Neanderthal admixture than other groups, but that is extremely speculative; I don’t know of any scientists who have said so. We’re talking here about quite small amounts, like 0.5%, most of which appears to code for things like immune response and possibly some adaptations for handling long, cold winters. None of this appears to code for physical traits like skull shape, which have been under different selective pressures over the past 40,000 years.

As much as I would love to discover a group with significant Neanderthal DNA, that’s just not something we’ve found in anyone alive today.

Sorry, guys.

I’m probably wrong!

When trying to learn and understand approximately everything, one is forced to periodically admit that there are a great many things one does not yet know.

I made a diagram of my thoughts from yesterday:

Human tree based on Haak

My intuition tells me this is wrong.

Abbreviations: SSA =  Sub-Saharan Africa; ANE = Ancient North Eurasian, even though they’re found all over the place; WHG = European hunter-gatherers; I-Es = Indo-Europeans.

I tried my best to make it neat and clear, focusing on the big separations and leaving out the frequent cross-mixing. Where several groups had similar DNA, I used one group to represent the group (eg, Yoruba,) and left out groups whose histories were just too complicated to express clearly at this size. A big chunk of the Middle East/middle of Eurasia is a mixing zone where lots of groups seem to have merged. (Likewise, I obviously left out groups that weren’t in Haak’s dataset, like Polynesians.)

I tried to arrange the groups sensibly, so that ones that are geographically near each other and/or have intermixed are near each other on the graph, but this didn’t always work out–eg, the Inuit share some DNA with other Native American groups, but ended up sandwiched between India and Siberia.

Things get complicated around the emergence of the Indo-Europeans (I-Es), who emerged from the combination of a known population (WHG) and an unknown population that I’m super-speculating might have come from India, after which some of the I-Es might have returned to India. But then there is the mystery of why the color on the graph changes from light green to teal–did another group related to the original IEs emerge, or is this just change over time?

The IEs are also, IMO, at the wrong spot in time (so are the Pygmies.) Maybe this is just a really bad proxy for time? Maybe getting conquered makes groups combine in ways that look like they differentiated at times other than when they did?

Either way, I am, well, frustrated.

EDIT: Oh, I just realized something I did wrong.

*Fiddles*

Still speculative, but hopefully better

Among other things, I realized I’d messed up counting off where some of the groups split, so while I was fixing that, I went ahead and switched the Siberians and Melanesians so I could get the Inuit near the other Americans.

I also realized that I was trying to smush together the emergence of the WHG and the Yamnaya, even though those events happened at different times. The new version shows the WHG and Yamnaya (proto-Indo-Europeans) at two very different times.

Third, I have fixed it so that the ANE don’t feed directly into modern Europeans. The downside of the current model is that it makes it look like the ANE disappeared, when really they just dispersed into so many groups which mixed in turn with other groups that they ceased existing in “pure” form, though the Bedouins, I suspect, come closest.

The “light green” and “teal” colors on Haak’s graph are still problematic–light green doesn’t exist in “pure” form anywhere on the graph, but it appears to be highest in India. My interpretation is that the light green derived early on from an ANE population somewhere around India (though Iran, Pakistan, the Caucasus, or the steppes are also possibilities,) and somewhat later mixed with an “East” population in India. A bit of that light green population also made it into the Onge, and later, I think, a branch of it combined with the WHG to create the Yamnaya. (Who, in turn, conquered some ANE groups, creating the modern Europeans.)

I should also note that I might have the Khoi and San groups backwards, because I’m not all that familiar with them.

I could edit this post and just eliminate my embarrassing mistakes, but I think I’ll let them stay in order to show the importance of paying attention to the nagging sense of being wrong. It turns out I was! I might still be wrong, but hopefully I’m less wrong.