“Heritable” (or “heritability”) has a specific and unfortunately non-obvious definition in genetics.
The word sounds like a synonym for “inheritable,” rather like your grandmother’s collection of musical clocks. Musical clocks are inheritable; fruit, since it rots, is not very inheritable.
This is not what “heritable” means.
“Heritability,” in genetics, is a measure of the percent of phenotypic variation within a population that can be attributed to genetics.
Let me clarify that in normal speak. “Phenotype” is something you can actually see about an organism, like how tall it is or the nest it builds. “Phenotypic variation” means things like “variation in height” or “variation in nest size.”
Let’s suppose we have two varieties of corn: a giant strain and a dwarf strain. If we plant them in a 100% even field with the same nutrients, water, sunlight, etc. at every point in the field, then close to 100% of the variation in the resulting corn plants is genetic (some is just random chance, of course.)
In this population, then, height is nearly 100% heritable.
Let’s repeat the experiment, but this time, we sow our corn in an irregular field. Some patches have good soil; some have bad. Some spots are too dry or too wet. Some are sunny; others shaded. Etc.
Here it gets interesting, because aside from a bit of random chance in the distribution of seeds and environmental response, in most areas of the irregular field, our “tall” corn is still taller than the “short” corn. In the shady areas, neither variety gets enough sun, but the tall corn still grows taller. In the nutrient-poor areas, neither variety gets enough nutrients, but the tall corn still grows taller. But when we compare all of the corn all over the field, dwarf corn grown in the best areas grows taller than giant corn grown in the worst areas.
Our analysis of the irregular field leads us to conclude that water, sunlight, nutrients, and genes are all important in determining how tall corn gets.
Height in the irregular field is still heritable–genes are still important–but it is not 100% heritable, because other stuff is important, too.
What does it mean to be 10, 40, or 80% heritable?
If height is 10% heritable, then most of the variety in height you see is due to non-genetic factors, like nutrition. Genes still have an effect–people with tall genes will still, on average, be taller–but environmental effects really dominate–perhaps some people who should have been tall are severely malnourished.
In modern, first world countries, height is about 80% heritable–that is, since most people in first world countries get plenty of food and don’t catch infections that stunt their growth, most of the variation we see is genetic. In some third world countries, however, the heritability of height drops to 65%. These are places where many people do not get the nutrients they need to achieve their full genetic potential.
How do you achieve 0% heritability?
A trait is 0% heritable not if you can’t inherit it, but if genetics explains none of the variation in the sample. Suppose we seeded an irregular field entirely with identical, cloned corn. The height of the resulting corn would vary from area to area depending on nutrients, sunlight, water, etc. Since the original seeds were 100% genetically identical, all of the variation is environmental. Genes are, of course, important to height–if the relevant genes disappeared from the corn, it would stop growing–but they explain none of the variation in this population.
The heritability of a trait decreases, therefore, as genetic uniformity increases or the environment becomes more unequal. Heritability increases as genetics become more varied or the environment becomes more equal.
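The corn thought experiment can be sketched numerically. In this toy simulation (all numbers invented: a 30 cm genetic gap between the two strains, with soil, water, and sunlight rolled into one Gaussian noise term), heritability is just the share of total height variance contributed by genotype, and it falls as the field grows more irregular:

```python
import random
from statistics import pvariance

random.seed(42)

def simulate_field(env_sd, n=10_000):
    """Broad-sense heritability of height in a 50/50 giant/dwarf planting."""
    g_vals = [random.choice([0.0, 30.0]) for _ in range(n)]  # genetic effect, cm
    e_vals = [random.gauss(0.0, env_sd) for _ in range(n)]   # environment, cm
    heights = [150.0 + g + e for g, e in zip(g_vals, e_vals)]
    return pvariance(g_vals) / pvariance(heights)            # Vg / (Vg + Ve)

print(f"even field      (env sd =  1 cm): H^2 ~ {simulate_field(1):.2f}")
print(f"irregular field (env sd = 30 cm): H^2 ~ {simulate_field(30):.2f}")
```

With cloned corn, `g_vals` would be all identical, the numerator would be zero, and heritability would be 0% no matter how important the genes are to growth.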
Note that the genes involved do not need to code directly for the trait being measured. The taller people in a population, for example, might have lactase persistence genes, which let them extract more calories from the milk they drink than their neighbors. Or they might be thieves who steal food from their neighbors.
I remember a case where investigators were trying to discover why most of the boys at an orphanage had developed pellagra, then a mystery disease, but some hadn’t. It turns out that the boys who hadn’t developed it were sneaking into the kitchen at night and stealing food.
Pellagra is a nutritional deficiency caused by lack of niacin, aka B3. Poor Southerners used to come down with it from eating diets composed solely of (un-nixtamalized) corn for months on end.
The ultimate cause of pellagra is environmental–lack of niacin–but who comes down with pellagra is at least partially determined by genes, because genes influence your likelihood of eating nothing but corn for 6 months straight. Sociopaths who steal the occasional ham, for example, won’t get pellagra, but sociopaths who get caught and sent to badly run prisons increase their odds of getting it. In general, smart people who work hard and earn lots of money significantly decrease their chance of getting it, but smart black people enslaved against their will are more likely to get it. So pellagra is heritable–even though it is ultimately a nutritional deficiency.
What’s the point of heritability?
If you’re breeding corn (or cattle,) it helps to know whether, given good conditions, you can hope to change a trait. Traits with low heritability even under good conditions simply can’t be affected very much by breeding, while traits with high heritability can.
In humans, heritability helps us seek out the ultimate causes of diseases. On a social level, it can help measure how fair a society is, or whether the things we are doing to try to make society better are actually working.
For example, people would love to find a way to make children smarter. From Baby Einstein to HeadStart, people have tried all sorts of things to raise IQ. But beyond making sure that everyone has enough to eat, no nutrient deficiencies, and some kind of education, few of these interventions seem to make much difference.
Here people usually throw in a clarification about the difference between “shared” and “non-shared” environment. Shared environment is stuff you share with other members of your population, like the house your family lives in or the school you and your classmates attend. Non-shared is basically “random stuff,” like the time you caught meningitis but your twin didn’t.
Like anything controversial, people of course argue about the methodology and mathematics of these studies. They also argue about proximate and ultimate causes, and get caught up in matters of cultural variation. For example, is wearing glasses heritable? Some would say that it can’t be, because how can you inherit a gene that somehow codes for possessing a newly invented (on the scale of human evolutionary history) object?
But this is basically a fallacy that stems from mixing up proximate and ultimate causes. Obviously there is no gene that makes a pair of glasses grow out of your head, nor one that makes you feel compelled to go and buy them. It is also obvious that not all human populations throughout history have had glasses. But within a population that does have glasses, your chance of wearing glasses is strongly predicted by whether or not you are nearsighted, and nearsightedness is a remarkable 91% heritable.
Of course, some nearsighted people opt to wear contact lenses, which lowers the heritability estimate for glasses, but the signal is still pretty darn strong, since almost no one who doesn’t have vision difficulties wears glasses.
If we expand our sample population to include people who lived before the invention of eyeglasses, or who live in countries where most people are too poor to afford glasses, then our heritability estimate will drop quite a bit. You can’t buy glasses if they don’t exist, after all, no matter how bad your eyesight is. But the fact that glasses themselves are a recent artifact of particular human cultures does not change the fact that, within those populations, wearing glasses is heritable.
“Heritability” does not depend on whether there is (or we know of) any direct mechanism for a gene to code for the thing under study. It is only a statistical measure of genetic variation that correlates with the visible variation we’re looking at in a population.
A species may live in relative equilibrium with its environment, hardly changing from generation to generation, for millions of years. Turtles, for example, have barely changed since the Cretaceous, when dinosaurs still roamed the Earth.
But if the environment changes–critically, if selective pressures change–then the species will change, too. This was most famously demonstrated with English moths, which changed color from white-and-black speckled to pure black when pollution darkened the trunks of the trees they lived on. To survive, these moths need to avoid being eaten by birds, so any moth that stands out against the tree trunks tends to get turned into an avian snack. Against light-colored trees, dark-colored moths stood out and were eaten. Against dark-colored trees, light-colored moths stood out.
This change did not require millions of years. Dark-colored moths were virtually unknown in 1810, but by 1895, 98% of the moths were black.
The time it takes for evolution to occur depends simply on A. The frequency of a trait in the population and B. How strongly you are selecting for (or against) it.
Let’s break this down a little bit. Within a species, there exists a great deal of genetic variation. Some of this variation happens because two parents with different genes get together and produce offspring with a combination of their genes. Some of this variation happens because of random errors–mutations–that occur during copying of the genetic code. Much of the “natural variation” we see today started as some kind of error that proved to be useful, or at least not harmful. For example, all humans originally had dark skin similar to modern Africans’, but random mutations in some of the folks who no longer lived in Africa gave them lighter skin, eventually producing “white” and “Asian” skin tones.
(These random mutations also happen in Africa, but there they are harmful and so don’t stick around.)
Natural selection can only act on the traits that are actually present in the population. If we tried to select for “ability to shoot x-ray lasers from our eyes,” we wouldn’t get very far, because no one actually has that mutation. By contrast, albinism is rare, but it definitely exists, and if for some reason we wanted to select for it, we certainly could. (The incidence of albinism among the Hopi Indians is high enough–1 in 200 Hopis vs. 1 in 20,000 Europeans generally and 1 in 30,000 Southern Europeans–for scientists to discuss whether the Hopi have been actively selecting for albinism. This still isn’t a lot of albinism, but since the American Southwest is not a good environment for pale skin, it’s something.)
You will have a much easier time selecting for traits that crop up more frequently in your population than traits that crop up rarely (or never).
Second, we have intensity–and variety–of selective pressure. What % of your population is getting removed by natural selection each year? If 50% of your moths get eaten by birds because they’re too light, you’ll get a much faster change than if only 10% of moths get eaten.
Selection doesn’t have to involve getting eaten, though. Perhaps some of your moths are moth Lotharios, seducing all of the moth ladies with their fuzzy antennae. Over time, the moth population will develop fuzzier antennae as these handsome males out-reproduce their less hirsute cousins.
No matter what kind of selection you have, nor what part of your curve it’s working on, all that ultimately matters is how many offspring each individual has. If white moths have more children than black moths, then you end up with more white moths. If black moths have more babies, then you get more black moths.
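The moth example can be reduced to a few lines of arithmetic. This is a deliberately simple one-locus, haploid-style model with invented survival rates (0.9 for black moths, 0.8 for white); it only illustrates how a modest, sustained difference in surviving offspring moves a morph from rare to nearly universal within a century:

```python
def next_freq(freq_black, w_black, w_white):
    """One generation of selection: each morph's share of the next
    generation is proportional to its relative fitness."""
    mean_w = freq_black * w_black + (1 - freq_black) * w_white
    return freq_black * w_black / mean_w

freq = 0.01            # black moths rare, as around 1810
for _ in range(85):    # ~85 generations, 1810 -> 1895
    freq = next_freq(freq, w_black=0.9, w_white=0.8)
print(f"black-morph frequency after 85 generations: {freq:.2f}")
```

A fitness ratio of only 0.9/0.8 per generation, compounded for 85 generations, is enough to take the black morph from 1% to near fixation–no mutation rate or millions of years required.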
So what happens when you completely remove selective pressures from a population?
Back in 1968, ethologist John B. Calhoun set up an experiment popularly called “Mouse Utopia.” Four pairs of mice were given a large, comfortable habitat with no predators and plenty of food and water.
Predictably, the mouse population increased rapidly–once the mice were established in their new homes, their population doubled every 55 days. But after 211 days of explosive growth, reproduction began–mysteriously–to slow. For the next 245 days, the mouse population doubled only once every 145 days.
The birth rate continued to decline. As births and death reached parity, the mouse population stopped growing. Finally the last breeding female died, and the whole colony went extinct.
As I’ve mentioned before, Israel is (AFAIK) the only developed country in the world with a TFR above replacement.
It has long been known that overcrowding leads to population stress and reduced reproduction, but overcrowding can only explain why the mouse population began to shrink–not why it died out. Surely by the time there were only a few breeding pairs left, things had become comfortable enough for the remaining mice to resume reproducing. Why did the population not stabilize at some comfortable level?
Professor Bruce Charlton suggests an alternative explanation: the removal of selective pressures on the mouse population resulted in increasing mutational load, until the entire population became too mutated to reproduce.
Unfortunately, randomly changing part of your genetic code is more likely to give you no skin than skintanium armor.
But only the worst genetic problems never see the light of day. Plenty of mutations merely reduce fitness without actually killing you. Down Syndrome, famously, is caused by an extra copy of chromosome 21.
While a few traits–such as sex or eye color–can be simply modeled as influenced by only one or two genes, many traits–such as height or IQ–appear to be influenced by hundreds or thousands of genes:
Differences in human height are 60–80% heritable, according to several twin studies, and height has been considered polygenic since the Mendelian-biometrician debate a hundred years ago. A genome-wide association (GWA) study of more than 180,000 individuals has identified hundreds of genetic variants in at least 180 loci associated with adult human height. The number of individuals has since been expanded to 253,288 individuals and the number of genetic variants identified is 697 in 423 genetic loci.
Obviously most of these genes each plays only a small role in determining overall height (and this is of course holding environmental factors constant.) There are a few extreme conditions–gigantism and dwarfism–that are caused by single mutations, but the vast majority of height variation is caused by which particular mix of those 700 or so variants you happen to have.
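A quick sketch of why a ~700-variant trait looks the way it does: summing hundreds of tiny, independent allele effects produces an approximately normal, bell-curve distribution (the central limit theorem at work). The per-allele effect size and the 50/50 allele frequency below are invented for illustration:

```python
import random
from statistics import mean, stdev

random.seed(1)
N_LOCI = 700       # roughly the number of variants in the study quoted above
EFFECT_CM = 0.15   # invented per-allele contribution to height

def simulated_height():
    # Each of the 2*N_LOCI allele slots (diploid) carries the "tall"
    # variant with probability 0.5; height sums many tiny contributions.
    tall_alleles = sum(random.random() < 0.5 for _ in range(2 * N_LOCI))
    return 60.0 + tall_alleles * EFFECT_CM

heights = [simulated_height() for _ in range(5_000)]
print(f"mean {mean(heights):.1f} cm, sd {stdev(heights):.1f} cm")
```

No single locus matters much–drop any one of them and the distribution barely shifts–which is also why enormous samples are needed to detect each variant’s tiny effect.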
The general figure for the heritability of IQ, according to an authoritative American Psychological Association report, is 0.45 for children, and rises to around 0.75 for late teens and adults. In simpler terms, IQ goes from being weakly correlated with genetics, for children, to being strongly correlated with genetics for late teens and adults. … Recent studies suggest that family and parenting characteristics are not significant contributors to variation in IQ scores; however, poor prenatal environment, malnutrition and disease can have deleterious effects.…
Despite intelligence having substantial heritability2 (0.54) and a confirmed polygenic nature, initial genetic studies were mostly underpowered3, 4, 5. Here we report a meta-analysis for intelligence of 78,308 individuals. We identify 336 associated SNPs (METAL P < 5 × 10−8) in 18 genomic loci, of which 15 are new. Around half of the SNPs are located inside a gene, implicating 22 genes, of which 11 are new findings. Gene-based analyses identified an additional 30 genes (MAGMA P < 2.73 × 10−6), of which all but one had not been implicated previously. We show that the identified genes are predominantly expressed in brain tissue, and pathway analysis indicates the involvement of genes regulating cell development (MAGMA competitive P = 3.5 × 10−6). Despite the well-known difference in twin-based heritability2 for intelligence in childhood (0.45) and adulthood (0.80), we show substantial genetic correlation (rg = 0.89, LD score regression P = 5.4 × 10−29). These findings provide new insight into the genetic architecture of intelligence.
The greater the number of genes influencing a trait, the harder they are to identify without extremely large studies, because any small group of people might not even have the same set of relevant genes.
High IQ correlates positively with a number of life outcomes, like health and longevity, while low IQ correlates with negative outcomes like disease, mental illness, and early death. Obviously this is in part because dumb people are more likely to make dumb choices which lead to death or disease, but IQ also correlates with choice-free matters like height and your ability to quickly press a button. Our brains are not some mysterious entities floating in a void, but physical parts of our bodies, and anything that affects our overall health and physical functioning is likely to also have an effect on our brains.
The study focused, for the first time, on rare, functional SNPs – rare because previous research had only considered common SNPs and functional because these are SNPs that are likely to cause differences in the creation of proteins.
The researchers did not find any individual protein-altering SNPs that met strict criteria for differences between the high-intelligence group and the control group. However, for SNPs that showed some difference between the groups, the rare allele was less frequently observed in the high intelligence group. This observation is consistent with research indicating that rare functional alleles are more often detrimental than beneficial to intelligence.
Greg Cochran has some interesting Thoughts on Genetic Load. (Currently, the most interesting candidate genes for potentially increasing IQ also have terrible side effects, like autism, Tay Sachs and Torsion Dystonia. The idea is that–perhaps–if you have only a few genes related to the condition, you get an IQ boost, but if you have too many, you get screwed.) Of course, even conventional high-IQ has a cost: increased maternal mortality (larger heads).
the difference between the fitness of an average genotype in a population and the fitness of some reference genotype, which may be either the best present in a population, or may be the theoretically optimal genotype. … Deleterious mutation load is the main contributing factor to genetic load overall. Most mutations are deleterious, and occur at a high rate.
There’s math, if you want it.
Normally, genetic mutations are removed from the population at a rate determined by how bad they are. Really bad mutations kill you instantly, and so are never born. Slightly less bad mutations might survive, but never reproduce. Mutations that are only a little bit deleterious might have no obvious effect, but result in having slightly fewer children than your neighbors. Over many generations, this mutation will eventually disappear.
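This removal process can be sketched as a mutation-selection balance: a slightly deleterious allele is re-created by mutation at rate mu each generation and removed by selection at rate s, settling at an equilibrium frequency of roughly mu/s. The rates below are invented for illustration:

```python
MU = 1e-5   # mutation rate toward the deleterious allele, per generation
S  = 0.01   # fitness cost of carrying it ("a little bit deleterious")

q = 0.0
for _ in range(10_000):
    q = q * (1 - S) / (1 - S * q)   # selection removes some carriers
    q = q + MU * (1 - q)            # mutation re-creates the allele
print(f"equilibrium frequency ~ {q:.4f}   (mu/s = {MU/S:.4f})")
```

Note what the equilibrium implies: the weaker the selection (smaller s), the more copies of the bad allele persist in the population–which is the mechanism behind the mutational-load argument above.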
(Some mutations are more complicated–sickle cell, for example, is protective against malaria if you have only one copy of the mutation, but gives you sickle cell anemia if you have two.)
Throughout history, infant mortality was our single biggest killer. For example, here is some data from Jakubany, a town in the Carpathian Mountains:
We can see that, prior to the 1900s, the town’s infant mortality rate stayed consistently above 20%, and often peaked near 80%.
When I first ran a calculation of the infant mortality rate, I could not believe certain of the intermediate results. I recompiled all of the data and recalculated … with the same astounding result – 50.4% of the children born in Jakubany between the years 1772 and 1890 would die before reaching ten years of age! …one out of every two! Further, over the same 118 year period, of the 13306 children who were born, 2958 died (~22%) before reaching the age of one.
Historical infant mortality rates can be difficult to calculate in part because they were so high, people didn’t always bother to record infant deaths. And since infants are small and their bones delicate, their burials are not as easy to find as adults’. Nevertheless, Wikipedia estimates that Paleolithic man had an average life expectancy of 33 years:
Based on the data from recent hunter-gatherer populations, it is estimated that at 15, life expectancy was an additional 39 years (total 54), with a 0.60 probability of reaching 15.
In other words, a 40% chance of dying in childhood. (Not exactly the same as infant mortality, but close.)
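As a sanity check on the quoted figures: combining a 0.60 chance of reaching 15 (and then living 39 more years, to about 54) with an assumed average age at death of 2 for those who die young–that last number is my assumption, not from the source–lands right near the quoted life expectancy of 33:

```python
p_reach_15 = 0.60           # probability of reaching age 15, from the quote
adult_death_age = 15 + 39   # survivors live 39 more years on average
child_death_age = 2.0       # assumed average age at death for those who die young

e0 = p_reach_15 * adult_death_age + (1 - p_reach_15) * child_death_age
print(f"implied life expectancy at birth: {e0:.1f} years")
```

This is why “life expectancy of 33” never meant that adults dropped dead at 33–the low average is mostly an artifact of mass death in infancy and childhood.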
Wikipedia gives similarly dismal stats for life expectancy in the Neolithic (20-33), Bronze and Iron ages (26), Classical Greece (28 or 25), Classical Rome (20-30), Pre-Columbian Southwest US (25-30), Medieval Islamic Caliphate (35), Late Medieval English Peerage (30), early modern England (33-40), and the whole world in 1900 (31).
Over at ThoughtCo: Surviving Infancy in the Middle Ages, the author reports estimates for between 30 and 50% infant mortality rates. I recall a study on Anasazi nutrition which I sadly can’t locate right now, which found 100% malnutrition rates among adults (based on enamel hypoplasias,) and 50% infant mortality.
As Priceonomics notes, the main driver of increasing global life expectancy–48 years in 1950 and 71.5 years in 2014 (according to Wikipedia)–has been a massive decrease in infant mortality. The average life expectancy of an American newborn back in 1900 was only 47 and a half years, whereas a 60 year old could expect to live to be 75. In 1998, the average infant could expect to live to about 75, and the average 60 year old could expect to live to about 80.
Michael A Woodley suggests that what was going on [in the Mouse experiment] was much more likely to be mutation accumulation; with deleterious (but non-fatal) genes incrementally accumulating with each generation and generating a wide range of increasingly maladaptive behavioural pathologies; this process rapidly overwhelming and destroying the population before any beneficial mutations could emerge to ‘save’ the colony from extinction. …
The reason why mouse utopia might produce so rapid and extreme a mutation accumulation is that wild mice naturally suffer very high mortality rates from predation. …
Thus mutation selection balance is in operation among wild mice, with very high mortality rates continually weeding-out the high rate of spontaneously-occurring new mutations (especially among males) – with typically only a small and relatively mutation-free proportion of the (large numbers of) offspring surviving to reproduce; and a minority of the most active and healthy (mutation free) males siring the bulk of each generation.
However, in Mouse Utopia, there is no predation and all the other causes of mortality (e.g. starvation, violence from other mice) are reduced to a minimum – so the frequent mutations just accumulate, generation upon generation – randomly producing all sorts of pathological (maladaptive) behaviours.
Today, almost everyone in the developed world has plenty of food, a comfortable home, and doesn’t have to worry about dying of bubonic plague. We live in humantopia, where the biggest factor influencing how many kids you have is how many you want to have.
Back in 1930, infant mortality rates were highest among the children of unskilled manual laborers, and lowest among the children of professionals (IIRC, this is British data.) Today, infant mortality is almost non-existent, but voluntary childlessness has now inverted this phenomenon:
Yes, the percent of childless women appears to have declined since 1994, but the overall pattern of who is having children still holds. Further, while only 8% of women with post graduate degrees have 4 or more children, 26% of those who never graduated from high school have 4+ kids. Meanwhile, the age of first-time moms has continued to climb.
Take a moment to consider the high-infant mortality situation: an average couple has a dozen children. Four of them, by random good luck, inherit a good combination of the couple’s genes and turn out healthy and smart. Four, by random bad luck, get a less lucky combination of genes and turn out not particularly healthy or smart. And four, by very bad luck, get some unpleasant mutations that render them quite unhealthy and rather dull.
Infant mortality claims half their children, taking the least healthy. They are left with 4 bright children and 2 moderately intelligent children. The three brightest children succeed at life, marry well, and end up with several healthy, surviving children of their own, while the moderately intelligent do okay and end up with a couple of children.
On average, society’s overall health and IQ should hold steady or even increase over time, depending on how strong the selective pressures actually are.
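The scenario above is a form of truncation selection, and it can be sketched in a few lines. The “health/IQ” scores and the 50% cutoff are invented; the point is only that when mortality removes the lowest-scoring half of a brood, the survivors’ mean is higher than the full brood’s:

```python
import random
from statistics import mean

random.seed(7)

# One couple's dozen children, each with a normally distributed
# "health/IQ" score (mean 100, sd 15 -- invented units).
children = sorted(random.gauss(100, 15) for _ in range(12))

survivors = children[6:]   # mortality takes the least healthy half

print(f"mean score, all twelve children: {mean(children):.1f}")
print(f"mean score, surviving six:       {mean(survivors):.1f}")
```

Remove the truncation step–let all twelve survive and reproduce–and the next generation's mean is pulled back down by the bottom half, which is the modern situation the following paragraphs describe.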
Or consider a consanguineous couple with a high risk of genetic birth defects: perhaps a full 80% of their children die, but 20% turn out healthy and survive.
Today, by contrast, your average couple has two children. One of them is lucky, healthy, and smart. The other is unlucky, unhealthy, and dumb. Both survive. The lucky kid goes to college, majors in underwater intersectionist basket-weaving, and has one kid at age 40. That kid has Down Syndrome and never reproduces. The unlucky kid can’t keep a job, has chronic health problems, and 3 children by three different partners.
Your consanguineous couple migrates from war-torn Somalia to Minnesota. They still have 12 kids, but three of them are autistic with IQs below the official retardation threshold. “We never had this back in Somalia,” they cry. “We don’t even have a word for it.”
People normally think of dysgenics as merely “the dumb outbreed the smart,” but genetic load applies to everyone–men and women, smart and dull, black and white, young and especially old–because we all make random transcription errors when copying our DNA.
I could offer a list of signs of increasing genetic load, but there’s no way to avoid cherry-picking trends I already know are happening, like falling sperm counts or rising (diagnosed) autism rates, so I’ll skip that. You may substitute your own list of “obvious signs society is falling apart at the genes” if you so desire.
Nevertheless, the transition from 30% (or greater) infant mortality to almost 0% is amazing, both on a technical level and because it heralds an unprecedented era in human evolution. The selective pressures on today’s people are massively different from those our ancestors faced, simply because our ancestors’ biggest filter was infant mortality. Unless infant mortality acted completely at random–taking the genetically loaded and unloaded alike–or on factors completely irrelevant to load, the elimination of infant mortality must continuously increase the genetic load in the human population. Over time, if that load is not selected out–say, through more people being too unhealthy to reproduce–then we will end up with an increasing population of physically sick, maladjusted, mentally ill, and low-IQ people.
If all of the above is correct, then I see only 4 ways out:
1. Do nothing: Genetic load increases until the population is non-functional and collapses, resulting in a return of Malthusian conditions, invasion by stronger neighbors, or extinction.
2. Sterilization or other weeding out of high-load people, coupled with higher fertility by low-load people
3. Abortion of high-load fetuses
4. Genetic engineering to repair or remove deleterious mutations
#1 sounds unpleasant, and #2 would result in masses of unhappy people. We don’t have the technology for #4, yet. I don’t think the technology is quite there for #3, either, but it’s much closer–we can certainly test for many of the deleterious mutations that we do know of.
Continuing with our discussion of Leuconoe’s question:
What is your opinion of the “racial invariance hypothesis” which says that poor whites have about the same crime rate as poor blacks and that if you control for socioeconomic status all the differences between the races in crime go away?
First, let’s be clear about what HBD says (and doesn’t say) about crime, race, and poverty (and while we’re at it, IQ):
1. Genes influence traits like IQ and criminality.
As JayMan is fond of saying, “All human behavioral traits are heritable.” Okay, but what does this mean? Are we slaves to our genetics? Is there a murder gene that guarantees that you will go out and stab someone to death? Since JayMan has already written a great explanation, I will quote him and urge you to read the rest:
The First Law emerges from studies of twins, studies of adoptees, and (now) sibling genetic similarity studies. In short, when you look at people’s behavior, virtually without exception … you find some effect of the genes on these traits….
How could this be, you may ask? How could such complex and highly specific things be encoded in the DNA and express themselves despite decades of upbringing and childhood experiences? For one, heritability is only probabilistic, not absolute. Few traits are 100% heritable. …
But, it’s important to understand the meaning of the term heritability. Heritability is the degree of variation in a studied population that can be attributed to genetic variation in that population. The cause of the variance in question is always due to some genetic difference, but it doesn’t tell you how direct such genetic influence is. …
So, how iron-clad is the First Law? Clearly, not all traits are heritable, right? Right. However, there are only a distinct set of exceptions. Traits that are dependent on content aren’t heritable at all. These include what language you speak, in which particular church you worship, and what specific political party you identify with. However, the degree and manner to which one interacts with these things are very heritable: how proficient you are with language, how church-going you are, how liberal or conservative.
Note that these are not 100% heritable. There is no “guaranteed to stab people” gene, but there are genes that will make you more likely to want to stab people. Environment, “free will,” and random chance also influence how personality traits manifest in individuals.
Edit: It occurs to me that I should actually talk about some of these genes.
An MAOA variant, nicknamed “the warrior gene,” is the most famous of these. Wikipedia states:
A version of the monoamine oxidase-A gene has been popularly referred to as the warrior gene. Several different versions of the gene are found in different individuals, although a functional gene is present in most humans (with the exception of a few individuals with Brunner syndrome). In the variant, the allele associated with behavioural traits is shorter (30 bases) and may produce less MAO-A enzyme. This gene variation is in a regulatory promoter region about 1000 bases from the start of the region that encodes the MAO-A enzyme.
Studies have found differences in the frequency distribution of variants of the MAOA gene between ethnic groups: of the participants, 59% of Black men, 54% of Chinese men, 56% of Maori men, and 34% of Caucasian men carried the 3R allele, while 5.5% of Black men, 0.1% of Caucasian men, and 0.00067% of Asian men carried the 2R allele.
Individuals with the low activity MAOA gene, when faced with social exclusion or ostracism, showed higher levels of aggression than individuals with the high activity MAOA gene.
Doubtless there are other genes I’m not aware of.
2. The frequency of different genes varies between genetically-related groups.
The obvious genes here are ones that code for environmental responses, like lactase persistence in groups that have historically practiced dairy farming and dark skin in areas with intense sunlight.
Everyone on earth shares more genes with people closely related to them than with people less closely related. For example, the Amish are more genetically similar to other Amish than to non-Amish, and Pygmies are more closely related to other Pygmies than to non-Pygmies. This is why people look like their parents.
There are a lot of people who claim that “race is a social construct.” From a genetic standpoint, this is simply untrue (look at the top of the blog for an example of how geneticists can distinguish between different genetic groups).
3. The HBD-theory is that the genes for personality/behavioral traits also vary by genetically-related groups, due to historical environmental (including cultural!) pressures.
For example, Polynesians may have been selected for navigational ability, because good navigators populated Polynesia and bad navigators died at sea. Chinese culture may have selected for people willing to work hard and get along even when they don’t really feel like it; and the Inuit may have been selected for the ability to stand really long, dark winters.
Relevant to our discussion, crime rates vary a lot by region:
We’ve discussed warfare in pre-state societies over quite a few posts lately, so I’m going to summarize quickly: anthropological, historical, and archaeological records all document extremely high rates of violence in non-state societies. Anthropologist Napoleon Chagnon actually kept track of both homicides and births among the Yanomamo, and found that Yanomamo men who had killed more people had more children than Yanomamo men who had killed fewer people, providing a direct mechanism for genetic selection for traits related to homicide and other violence.
Many HBD bloggers, such as Peter Frost and HBD Chick, have discussed the ways in which states have discouraged crime, including (especially) executing criminals and thus preventing them from having children. The observed result:
That all said, there are things that no serious HBD-er claims:
A. That all people or sub-groups within a “race” are identical. As Peter Frost wrote, “No, blacks aren’t all alike. Who said they are?” There are smart black people and dumb black people. Hard-working whites and lazy whites. Extroverted Asians and introverted Asians. Some white groups (like Russians, apparently) have significantly higher crime rates than other white groups. Even within the US, there are differences between different groups of whites, with significant ethnic divisions between classes and regions.
B. That environmental effects don’t exist or that humans do not respond to incentives. Obviously, if it is cold outside I will wear a coat; if a law is passed that jaywalkers will be executed, I will immediately stop jaywalking.
C. That observed differences are set in stone. The world is always changing; where selection pressures change, so do populations.
So to get back to Leuconoe’s first query, I would not be surprised if controlling for socioeconomic status made all (or most) racial differences in criminality disappear. In fact, this is basically what I would expect, because poverty, criminality, and low IQ are correlated, so controlling for one will tend to control for all of them.
But why on earth would you do this? If you control for bad decisions, most differences in intelligence disappear; if you control for enough differences, any difference will disappear. But as JayMan says, you can’t just control for a group’s entire history; likewise, you can’t just control for all of a group’s traits.
Moreover, this still doesn’t get at why different groups have different rates of criminality or poverty in the first place, nor whether A causes B, B causes A, or C causes A and B. And even if you could prove that poverty causes crime, you still haven’t answered why there’s so much more poverty in black communities than in white (or Asian) ones.
The evidence suggests that if there is police racial bias in arrests it is negligible. Victim and witness surveys show that police arrest violent criminals in close proportion to the rates at which criminals of different races commit violent crimes.
There are dramatic race differences in crime rates. Asians have the lowest rates, followed by whites, and then Hispanics. Blacks have notably high crime rates. This pattern holds true for virtually all crime categories and for virtually all age groups.
In 2013, of the approximately 660,000 crimes of interracial violence that involved blacks and whites, blacks were the perpetrators 85 percent of the time. This meant a black person was 27 times more likely to attack a white person than vice versa. A Hispanic was eight times more likely to attack a white person than vice versa.
If New York City were all white, the murder rate would drop by 91 percent, the robbery rate by 81 percent, and the shootings rate by 97 percent.
Both violent and non-violent crime have been declining in the United States since a high in 1993. 2015 saw a disturbing rise in murder in major American cities that some observers associated with “depolicing” in response to intense media and public scrutiny of police activity.
So much for controlling for income. It looks like equally poor whites and blacks still have massively different homicide rates. (Of course, I should note that the US welfare system attempts to put a minimum floor below which people don’t fall. Without intervention, equally poor whites and blacks might be more similar.)
Lotteries can be useful natural experiments; we can use them to test the accuracy of standard sociological theories, in which rich people buy their kids extra smarts, bigger brains, better health, etc.
David Cesarini, who I met at that Chicago meeting, has looked at the effect of winning the lottery in Sweden. He found that the “effects of parental wealth on infant health, drug consumption, scholastic performance and cognitive and non-cognitive skills can be bounded to a tight interval around zero.”
As I once mentioned, there was an important land lottery in Georgia in 1832. The winners received a 160-acre farm. But by 1880, their descendants were no more literate, and their occupational status no higher. The families in the top 2/3rds of income managed to hang on to some of their windfall, but lower-income families did not.
West Hunter does note that there is probably a level below which material deprivation really will harm (or kill) you, and that a random windfall in such a situation will do you good, but virtually no one in the modern West lives in famine or near-famine conditions.
(I suspect it is really easy to catch car thieves in Hawaii.)
Occam’s razor suggests that something is going on here.