Using a mobile-based virtual reality navigation task, we measured spatial navigation ability in more than 2.5 million people globally. Using a clustering approach, we find that navigation ability is not smoothly distributed globally but clustered into five distinct yet geographically related groups of countries. Furthermore, the economic wealth of a nation (Gross Domestic Product per capita) was predictive of the average navigation ability of its inhabitants and gender inequality (Gender Gap Index) was predictive of the size of performance difference between males and females. Thus, cognitive abilities, at least for spatial navigation, are clustered according to economic wealth and gender inequalities globally.
This is an incredible study. They got 2.5 million people from all over the world to participate.
If you’ve been following any of the myriad debates about intelligence, IQ, and education, you’re probably familiar with the concept of “multiple intelligences” and the fact that there’s rather little evidence that people actually have “different intelligences” that operate separately from each other. In general, it looks like people who have brains that are good at working out how to do one kind of task tend to be good at working out other sorts of tasks.
I’ve long held navigational ability as a possible exception to this: perhaps people in, say, Polynesian societies depended historically far more on navigational abilities than the rest of us, even though math and literacy were nearly absent.
Unfortunately, it doesn’t look like the authors got enough samples from Polynesia to include it in the study, but they did get data from Indonesia and the Philippines, which I’ll return to in a moment.
Frankly, I don’t see what the authors mean by “five distinct yet geographically related groups of countries.” South Korea is ranked between the UK and Belgium; Russia is next to Malaysia; Indonesia is next to Portugal and Hungary.
GDP per capita appears to be a stronger predictor than geography:
Some people will say these results merely reflect experience playing video games–people in wealthier countries have probably spent more time and money on computers and games. But assuming that the people who are participating in the study in the first place are people who have access to smartphones, computers, video games, etc., the results are not good for the multiple-intelligences hypothesis.
In the GDP per Capita vs. Conditional Modes graph (conditional modes measure how well a nation scored overall, with lower scores indicating better navigation), countries above the trend line are under-performing relative to their GDPs, and countries below the line are over-performing relative to their GDPs.
South Africa, for example, significantly over-performs relative to its GDP, probably due to sampling bias: white South Africans with smartphones and computers were probably more likely to participate in the study than the nation’s 90% black population, but the GDP reflects the entire population. Finland and New Zealand also score better than their GDPs would predict, perhaps because Finland is really cold and NZ is isolated.
On the other side of the line, the UAE, Saudi Arabia, and Greece under-perform relative to their GDPs. Two of these are oil states that would be much poorer if not for geographic chance, and as far as I can tell, the whole Greek economy is being propped up by German loans. (There is also evidence that Greek IQ is falling, though this may be a near-universal problem in developed nations.)
Three other nations stand out in the “scoring better than GDP predicts” category: Ukraine (which suffered under Communism–Communism seems to do bad things to countries), Indonesia, and the Philippines. While we could be looking at selection bias similar to South Africa’s, the latter two are island nations in which navigational ability surely had some historical effect on people’s ability to survive.
Indonesia and the Philippines still didn’t do as well as first-world nations like Norway and Canada, but they outperformed other nations with similar GDPs like Egypt, India, and Macedonia. This is the best evidence I know of for independent selection for navigational ability in some populations.
The study’s other interesting findings were that women performed consistently worse than men, across both countries and age groups (except for the post-90 cohort, but that might just be an error in the data). Navigational ability declines steadily for everyone from about age 23 until about age 75; the authors suggest the subsequent increase in ability might be sampling error, due to old people who are good at video games being disproportionately likely to seek out video-game-related challenges.
The authors note that people who drive more (eg, the US and Canada) might do better on navigational tasks than people who use public transportation more (eg, Europeans) but also that Finno-Scandians are among the world’s best navigators despite heavy use of public transport in those countries. The authors write:
We speculate that this specificity may be linked to Nordic countries sharing a culture of participating in a sport related to navigation: orienteering. Invented as an official sport in the late 19th century in Sweden, the first orienteering competition open to the public was held in Norway in 1897. Since then, it has been more popular in Nordic countries than anywhere else in the world, and is taught in many schools. We found that ‘orienteering world championship’ country results significantly correlated with countries’ CM (Pearson’s correlation ρ = .55, p = .01), even after correcting for GDP per capita (see Extended Data Fig. 15). Future targeted research will be required to evaluate the impact of cultural activities on navigation skill.
I suggest a different causal relationship: people make hobbies out of things they’re already good at and enjoy doing, rather than things they’re bad at.
Please note that the study doesn’t look at a big chunk of countries, like most of Africa. Being at the bottom in navigational abilities in this study by no means indicates that a country is at the bottom globally–given the trends already present in the data, it is likely that the poorer countries that weren’t included in the study would do even worse.
A species may live in relative equilibrium with its environment, hardly changing from generation to generation, for millions of years. Turtles, for example, have barely changed since the Cretaceous, when dinosaurs still roamed the Earth.
But if the environment changes–critically, if selective pressures change–then the species will change, too. This was most famously demonstrated with English moths, which changed color from white-and-black speckled to pure black when pollution darkened the trunks of the trees they lived on. To survive, these moths need to avoid being eaten by birds, so any moth that stands out against the tree trunks tends to get turned into an avian snack. Against light-colored trees, dark-colored moths stood out and were eaten. Against dark-colored trees, light-colored moths stood out.
This change did not require millions of years. Dark-colored moths were virtually unknown in 1810, but by 1895, 98% of the moths were black.
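The moth numbers above are enough to sketch how fast selection of this kind can move. Here is a toy single-locus model; the starting allele frequency and the selection coefficient are my illustrative assumptions, not the historical estimates:

```python
def generations_until(p0, s, target=0.98):
    """Generations until the dark *phenotype* reaches `target` frequency.

    Assumes the dark allele is fully dominant, so light moths are
    homozygous recessive; genotype fitnesses are 1, 1, and 1 - s.
    p0 is the initial dark-allele frequency.
    """
    p, gens = p0, 0
    while 1 - (1 - p) ** 2 < target:
        q = 1 - p
        w_bar = 1 - s * q ** 2  # mean fitness of the population
        p = p / w_bar           # one-locus recursion (both dark genotypes have fitness 1)
        gens += 1
    return gens

# A dark allele starting at 0.05%, with a ~20% survival disadvantage
# for light moths, sweeps to a 98% dark phenotype in under a century
# of one-generation-per-year moth breeding:
generations_until(p0=0.0005, s=0.2)
```

Stronger selection (a larger `s`) compresses the sweep further, which is the point of the next paragraph: the timescale is set by starting frequency and selection strength, nothing more.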
The time it takes for evolution to occur depends simply on (A) the frequency of a trait in the population and (B) how strongly you are selecting for (or against) it.
Let’s break this down a little bit. Within a species, there exists a great deal of genetic variation. Some of this variation happens because two parents with different genes get together and produce offspring with a combination of their genes. Some of this variation happens because of random errors–mutations–that occur during copying of the genetic code. Much of the “natural variation” we see today started as some kind of error that proved to be useful, or at least not harmful. For example, all humans originally had dark skin similar to modern Africans’, but random mutations in some of the folks who no longer lived in Africa gave them lighter skin, eventually producing “white” and “Asian” skin tones.
(These random mutations also happen in Africa, but there they are harmful and so don’t stick around.)
Natural selection can only act on the traits that are actually present in the population. If we tried to select for “ability to shoot x-ray lasers from our eyes,” we wouldn’t get very far, because no one actually has that mutation. By contrast, albinism is rare, but it definitely exists, and if for some reason we wanted to select for it, we certainly could. (The incidence of albinism among the Hopi Indians is high enough–1 in 200 Hopis vs. 1 in 20,000 Europeans generally and 1 in 30,000 Southern Europeans–for scientists to discuss whether the Hopi have been actively selecting for albinism. This still isn’t a lot of albinism, but since the American Southwest is not a good environment for pale skin, it’s something.)
You will have a much easier time selecting for traits that crop up more frequently in your population than traits that crop up rarely (or never).
Second, we have intensity–and variety–of selective pressure. What % of your population is getting removed by natural selection each year? If 50% of your moths get eaten by birds because they’re too light, you’ll get a much faster change than if only 10% of moths get eaten.
Selection doesn’t have to involve getting eaten, though. Perhaps some of your moths are moth Lotharios, seducing all of the moth ladies with their fuzzy antennae. Over time, the moth population will develop fuzzier antennae as these handsome males out-reproduce their less hirsute cousins.
No matter what kind of selection you have, nor what part of your curve it’s working on, all that ultimately matters is how many offspring each individual has. If white moths have more children than black moths, then you end up with more white moths. If black moths have more babies, then you get more black moths.
So what happens when you completely remove selective pressures from a population?
Back in 1968, ethologist John B. Calhoun set up an experiment popularly called “Mouse Utopia.” Four pairs of mice were given a large, comfortable habitat with no predators and plenty of food and water.
Predictably, the mouse population increased rapidly–once the mice were established in their new homes, their population doubled every 55 days. But after 211 days of explosive growth, reproduction began–mysteriously–to slow. For the next 245 days, the mouse population doubled only once every 145 days.
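The doubling times reported above translate directly into growth rates. A quick check (the population sizes are my extrapolation from the four founding pairs, not Calhoun's reported counts):

```python
import math

# N(t) = N0 * 2**(t / T), so the per-day growth rate is ln(2) / T.
r_boom = math.log(2) / 55     # explosive phase: doubling every 55 days
r_slow = math.log(2) / 145    # after day 211: doubling every 145 days

# The slowdown cut the growth rate to under 40% of its former value:
slowdown = r_slow / r_boom    # = 55 / 145

# Starting from 4 breeding pairs, 211 days of 55-day doublings gives:
n_day_211 = 8 * 2 ** (211 / 55)   # roughly 114 mice
```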
The birth rate continued to decline. As births and deaths reached parity, the mouse population stopped growing. Finally the last breeding female died, and the whole colony went extinct.
As I’ve mentioned before, Israel is (AFAIK) the only developed country in the world with a TFR above replacement.
It has long been known that overcrowding leads to population stress and reduced reproduction, but overcrowding can only explain why the mouse population began to shrink–not why it died out. Surely by the time there were only a few breeding pairs left, things had become comfortable enough for the remaining mice to resume reproducing. Why did the population not stabilize at some comfortable level?
Professor Bruce Charlton suggests an alternative explanation: the removal of selective pressures on the mouse population resulted in increasing mutational load, until the entire population became too mutated to reproduce.
Unfortunately, randomly changing part of your genetic code is more likely to give you no skin than skintanium armor.
But only the worst genetic problems never see the light of day. Plenty of mutations merely reduce fitness without actually killing you. Down Syndrome, famously, is caused by an extra copy of chromosome 21.
While a few traits–such as sex or eye color–can be simply modeled as influenced by only one or two genes, many traits–such as height or IQ–appear to be influenced by hundreds or thousands of genes:
Differences in human height is 60–80% heritable, according to several twin studies and has been considered polygenic since the Mendelian-biometrician debate a hundred years ago. A genome-wide association (GWA) study of more than 180,000 individuals has identified hundreds of genetic variants in at least 180 loci associated with adult human height. The number of individuals has since been expanded to 253,288 individuals and the number of genetic variants identified is 697 in 423 genetic loci.
Obviously most of these genes each plays only a small role in determining overall height (and this is of course holding environmental factors constant.) There are a few extreme conditions–gigantism and dwarfism–that are caused by single mutations, but the vast majority of height variation is caused by which particular mix of those 700 or so variants you happen to have.
The general figure for the heritability of IQ, according to an authoritative American Psychological Association report, is 0.45 for children, and rises to around 0.75 for late teens and adults. In simpler terms, IQ goes from being weakly correlated with genetics, for children, to being strongly correlated with genetics for late teens and adults. … Recent studies suggest that family and parenting characteristics are not significant contributors to variation in IQ scores; however, poor prenatal environment, malnutrition and disease can have deleterious effects.…
Despite intelligence having substantial heritability (0.54) and a confirmed polygenic nature, initial genetic studies were mostly underpowered. Here we report a meta-analysis for intelligence of 78,308 individuals. We identify 336 associated SNPs (METAL P < 5 × 10−8) in 18 genomic loci, of which 15 are new. Around half of the SNPs are located inside a gene, implicating 22 genes, of which 11 are new findings. Gene-based analyses identified an additional 30 genes (MAGMA P < 2.73 × 10−6), of which all but one had not been implicated previously. We show that the identified genes are predominantly expressed in brain tissue, and pathway analysis indicates the involvement of genes regulating cell development (MAGMA competitive P = 3.5 × 10−6). Despite the well-known difference in twin-based heritability for intelligence in childhood (0.45) and adulthood (0.80), we show substantial genetic correlation (rg = 0.89, LD score regression P = 5.4 × 10−29). These findings provide new insight into the genetic architecture of intelligence.
The greater the number of genes influencing a trait, the harder they are to identify without extremely large studies, because any small group of people might not even share the same set of relevant genes.
High IQ correlates positively with a number of life outcomes, like health and longevity, while low IQ correlates with negative outcomes like disease, mental illness, and early death. Obviously this is in part because dumb people are more likely to make dumb choices which lead to death or disease, but IQ also correlates with choice-free matters like height and your ability to quickly press a button. Our brains are not some mysterious entities floating in a void, but physical parts of our bodies, and anything that affects our overall health and physical functioning is likely to also have an effect on our brains.
The study focused, for the first time, on rare, functional SNPs – rare because previous research had only considered common SNPs and functional because these are SNPs that are likely to cause differences in the creation of proteins.
The researchers did not find any individual protein-altering SNPs that met strict criteria for differences between the high-intelligence group and the control group. However, for SNPs that showed some difference between the groups, the rare allele was less frequently observed in the high intelligence group. This observation is consistent with research indicating that rare functional alleles are more often detrimental than beneficial to intelligence.
Greg Cochran has some interesting Thoughts on Genetic Load. (Currently, the most interesting candidate genes for potentially increasing IQ also have terrible side effects, like autism, Tay-Sachs and Torsion Dystonia. The idea is that–perhaps–if you have only a few genes related to the condition, you get an IQ boost, but if you have too many, you get screwed.) Of course, even conventional high-IQ has a cost: increased maternal mortality (larger heads).
the difference between the fitness of an average genotype in a population and the fitness of some reference genotype, which may be either the best present in a population, or may be the theoretically optimal genotype. … Deleterious mutation load is the main contributing factor to genetic load overall. Most mutations are deleterious, and occur at a high rate.
There’s math, if you want it.
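A taste of that math: the classic mutation–selection balance result gives the equilibrium frequency at which a deleterious allele settles once mutation pumping it in and selection weeding it out cancel. The numbers plugged in below are illustrative, not from the post:

```python
import math

def equilibrium_freq(mu, s, recessive=False):
    """Textbook mutation-selection balance for a deleterious allele.

    mu: per-generation mutation rate toward the bad allele
    s:  selection coefficient against it
    Dominant (or haploid) case: q is roughly mu / s.
    Fully recessive case: q is roughly sqrt(mu / s), since selection
    only 'sees' the allele in homozygotes.
    """
    return math.sqrt(mu / s) if recessive else mu / s

# A dominant lethal (s = 1) persists only at the mutation rate itself,
# while a mildly deleterious recessive (s = 0.01) can be 10,000x more
# common at the same mutation rate:
equilibrium_freq(1e-6, 1.0)                    # 1e-6
equilibrium_freq(1e-6, 0.01, recessive=True)   # 0.01
```

This is why the next paragraph's sorting of mutations by badness matters: the milder and more recessive the mutation, the longer it hangs around.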
Normally, genetic mutations are removed from the population at a rate determined by how bad they are. Really bad mutations kill you instantly, and so are never born. Slightly less bad mutations might survive, but never reproduce. Mutations that are only a little bit deleterious might have no obvious effect, but result in having slightly fewer children than your neighbors. Over many generations, this mutation will eventually disappear.
(Some mutations are more complicated–sickle cell, for example, is protective against malaria if you have only one copy of the mutation, but gives you sickle cell anemia if you have two.)
Throughout history, infant mortality was our single biggest killer. For example, here is some data from Jakubany, a town in the Carpathian Mountains:
We can see that, prior to the 1900s, the town’s infant mortality rate stayed consistently above 20%, and often peaked near 80%.
When I first ran a calculation of the infant mortality rate, I could not believe certain of the intermediate results. I recompiled all of the data and recalculated … with the same astounding result – 50.4% of the children born in Jakubany between the years 1772 and 1890 would die before reaching ten years of age! …one out of every two! Further, over the same 118 year period, of the 13,306 children who were born, 2,958 died (~22%) before reaching the age of one.
Historical infant mortality rates can be difficult to calculate in part because they were so high that people didn’t always bother to record infant deaths. And since infants are small and their bones delicate, their burials are not as easy to find as adults’. Nevertheless, Wikipedia estimates that Paleolithic man had an average life expectancy of 33 years:
Based on the data from recent hunter-gatherer populations, it is estimated that at 15, life expectancy was an additional 39 years (total 54), with a 0.60 probability of reaching 15.
In other words, a 40% chance of dying in childhood. (Not exactly the same as infant mortality, but close.)
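The ~33-year figure and the ~40% childhood mortality are consistent with each other. A back-of-envelope check (the average age at childhood death is my assumption; most such deaths hit infants):

```python
p_reach_15 = 0.60           # probability of surviving to age 15
e_total_if_adult = 15 + 39  # expected total lifespan for those who do
avg_child_death_age = 2     # assumed average age at childhood death

life_expectancy_at_birth = (
    p_reach_15 * e_total_if_adult
    + (1 - p_reach_15) * avg_child_death_age
)
# 0.6 * 54 + 0.4 * 2 = 33.2 -- right at the ~33-year Paleolithic estimate
```

This also shows why "life expectancy of 33" never meant adults dropped dead at 33: the low average is almost entirely a childhood-mortality artifact.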
Wikipedia gives similarly dismal stats for life expectancy in the Neolithic (20-33), Bronze and Iron ages (26), Classical Greece (28 or 25), Classical Rome (20-30), Pre-Columbian Southwest US (25-30), Medieval Islamic Caliphate (35), Late Medieval English Peerage (30), early modern England (33-40), and the whole world in 1900 (31).
Over at ThoughtCo: Surviving Infancy in the Middle Ages, the author reports estimates for between 30 and 50% infant mortality rates. I recall a study on Anasazi nutrition which I sadly can’t locate right now, which found 100% malnutrition rates among adults (based on enamel hypoplasias,) and 50% infant mortality.
As Priceonomics notes, the main driver of increasing global life expectancy–48 years in 1950 and 71.5 years in 2014 (according to Wikipedia)–has been a massive decrease in infant mortality. The average life expectancy of an American newborn back in 1900 was only 47 and a half years, whereas a 60 year old could expect to live to be 75. In 1998, the average infant could expect to live to about 75, and the average 60 year old could expect to live to about 80.
Michael A Woodley suggests that what was going on [in the Mouse experiment] was much more likely to be mutation accumulation; with deleterious (but non-fatal) genes incrementally accumulating with each generation and generating a wide range of increasingly maladaptive behavioural pathologies; this process rapidly overwhelming and destroying the population before any beneficial mutations could emerge to ‘save’ the colony from extinction. …
The reason why mouse utopia might produce so rapid and extreme a mutation accumulation is that wild mice naturally suffer very high mortality rates from predation. …
Thus mutation selection balance is in operation among wild mice, with very high mortality rates continually weeding-out the high rate of spontaneously-occurring new mutations (especially among males) – with typically only a small and relatively mutation-free proportion of the (large numbers of) offspring surviving to reproduce; and a minority of the most active and healthy (mutation free) males siring the bulk of each generation.
However, in Mouse Utopia, there is no predation and all the other causes of mortality (e.g. starvation, violence from other mice) are reduced to a minimum – so the frequent mutations just accumulate, generation upon generation – randomly producing all sorts of pathological (maladaptive) behaviours.
Today, almost everyone in the developed world has plenty of food, a comfortable home, and doesn’t have to worry about dying of bubonic plague. We live in humantopia, where the biggest factor influencing how many kids you have is how many you want to have.
Back in 1930, infant mortality rates were highest among the children of unskilled manual laborers, and lowest among the children of professionals (IIRC, this is British data). Today, infant mortality is almost non-existent, but voluntary childlessness has now inverted this phenomenon:
Yes, the percent of childless women appears to have declined since 1994, but the overall pattern of who is having children still holds. Further, while only 8% of women with post graduate degrees have 4 or more children, 26% of those who never graduated from high school have 4+ kids. Meanwhile, the age of first-time moms has continued to climb.
Take a moment to consider the high-infant mortality situation: an average couple has a dozen children. Four of them, by random good luck, inherit a good combination of the couple’s genes and turn out healthy and smart. Four, by random bad luck, get a less lucky combination of genes and turn out not particularly healthy or smart. And four, by very bad luck, get some unpleasant mutations that render them quite unhealthy and rather dull.
Infant mortality claims half their children, taking the least healthy. They are left with 4 bright children and 2 moderately intelligent children. The three brightest children succeed at life, marry well, and end up with several healthy, surviving children of their own, while the moderately intelligent do okay and end up with a couple of children.
On average, society’s overall health and IQ should hold steady or even increase over time, depending on how strong the selective pressures actually are.
Or consider a consanguineous couple with a high risk of genetic birth defects: perhaps a full 80% of their children die, but 20% turn out healthy and survive.
Today, by contrast, your average couple has two children. One of them is lucky, healthy, and smart. The other is unlucky, unhealthy, and dumb. Both survive. The lucky kid goes to college, majors in underwater intersectionist basket-weaving, and has one kid at age 40. That kid has Down Syndrome and never reproduces. The unlucky kid can’t keep a job, has chronic health problems, and 3 children by three different partners.
Your consanguineous couple migrates from war-torn Somalia to Minnesota. They still have 12 kids, but three of them are autistic with IQs below the official retardation threshold. “We never had this back in Somalia,” they cry. “We don’t even have a word for it.”
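The contrast between the two regimes can be put in toy-simulation form. Everything here is made up for illustration – a "genome" is just a running count of deleterious mutations, and infant mortality simply culls the highest-load children before they can reproduce:

```python
import random

random.seed(0)

def next_generation(parents, kids_each=6, infant_mortality=0.0,
                    mutation_step=0.3):
    """Each parent has `kids_each` children; a child's load is the
    parent's load plus a random dose of new mutations. Under infant
    mortality, the highest-load children die before reproducing."""
    kids = [load + random.uniform(0, 2 * mutation_step)
            for load in parents
            for _ in range(kids_each)]
    kids.sort()
    survivors = kids[: int(len(kids) * (1 - infant_mortality))]
    # the next cohort of parents is drawn from the survivors
    return random.sample(survivors, len(parents))

pop_mortality = [0.0] * 50   # regime with 50% infant mortality
pop_utopia = [0.0] * 50      # regime where every child survives
for _ in range(20):
    pop_mortality = next_generation(pop_mortality, infant_mortality=0.5)
    pop_utopia = next_generation(pop_utopia)

mean = lambda xs: sum(xs) / len(xs)
# After 20 generations, mean load in 'utopia' runs roughly twice
# that of the high-mortality regime.
```

The mechanism is exactly the one in the moth story, run in reverse: remove the filter and the load that mutation keeps generating has nowhere to go but up.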
People normally think of dysgenics as merely “the dumb outbreed the smart,” but genetic load applies to everyone–men and women, smart and dull, black and white, young and especially old–because we all make random transcription errors when copying our DNA.
I could offer a list of signs of increasing genetic load, but there’s no way to avoid cherry-picking trends I already know are happening, like falling sperm counts or rising (diagnosed) autism rates, so I’ll skip that. You may substitute your own list of “obvious signs society is falling apart at the genes” if you so desire.
Nevertheless, the transition from 30% (or greater) infant mortality to almost 0% is amazing, both on a technical level and because it heralds an unprecedented era in human evolution. The selective pressures on today’s people are massively different from those our ancestors faced, simply because our ancestors’ biggest filter was infant mortality. Unless infant mortality acted completely at random–taking the genetically loaded and unloaded alike–or on factors completely irrelevant to load, the elimination of infant mortality must continuously increase the genetic load in the human population. Over time, if that load is not selected out–say, through more people being too unhealthy to reproduce–then we will end up with an increasing population of physically sick, maladjusted, mentally ill, and low-IQ people.
If all of the above is correct, then I see only 4 ways out:
Do nothing: Genetic load increases until the population is non-functional and collapses, resulting in a return of Malthusian conditions, invasion by stronger neighbors, or extinction.
Sterilization or other weeding out of high-load people, coupled with higher fertility by low-load people
Abortion of high-load fetuses
Genetic engineering to repair deleterious mutations
#1 sounds unpleasant, and #2 would result in masses of unhappy people. We don’t have the technology for #4, yet. I don’t think the technology is quite there for #2, either, but it’s much closer–we can certainly test for many of the deleterious mutations that we do know of.
I recently received a few IQ-related questions. Now, IQ is not my specialty, so I do not feel particularly adequate for the task, but I’ll do my best. I recommend anyone really interested in the subject read Pumpkin Person’s blog, as he really enjoys talking about IQ all the time.
I wanted to ask if you know any IQ test on the internet that is an equivalent to the reliable tests given by psychologists?
I suppose it depends on what you want the test for. Curiosity? Diagnosis? Personally, I suspect that the average person isn’t going to learn very much from an IQ test that they didn’t already know just from living (similarly, I don’t think you’re going to discover that you’re an introvert or extrovert by taking an online quiz if you didn’t know it already from interacting with people,) but there are cases where people might want to take an IQ test, so let’s get searching.
The Wechsler Adult Intelligence Scale (WAIS) is an IQ test designed to measure intelligence and cognitive ability in adults and older adolescents. The original WAIS (Form I) was published in February 1955 by David Wechsler, as a revision of the Wechsler–Bellevue Intelligence Scale, released in 1939. It is currently in its fourth edition (WAIS-IV) released in 2008 by Pearson, and is the most widely used IQ test, for both adults and older adolescents, in the world.
Since IQ tests excite popular interest but no one really wants to pay $1,000 just to take a test, the internet is littered with “free” tests of questionable quality. For example, WeschlerTest.com offers “free sample tests,” but the bottom of the website notes that, “Disclaimer: This is not an official Wechsler test and is only for entertainment purposes. Any scores derived from it may not accurately reflect the score you would attain on an official Wechsler test.” Here is a similar website that offers free Stanford-Binet Tests.
I am not personally in a position to judge if these are any good.
It looks like the US military has put its Armed Services Vocational Aptitude Battery online, or at least a practice version. This seems like one of the best free options, because the army is a real organization that’s deeply interested in getting accurate results and the relationship between the ASVAB and other IQ tests is probably well documented. From the website:
The ASVAB is a timed test that measures your skills in a number of different areas. You complete questions that reveal your skills in paragraph comprehension, word knowledge, arithmetic reasoning and mathematics knowledge. These are basic skills that you will need as a member of the U.S. military. The score you receive on the ASVAB is factored into your Armed Forces Qualifying Test (AFQT) score. This score is used to figure out whether you qualify to enlist in the armed services. …
The ASVAB was created in 1968. By 1976, all branches of the military began using this test. In 2002, the test underwent many revisions, but its main goal of gauging a person’s basic skills remained the same. Today, there is a computerized version of the test as well as a written version. The Department of Defense developed this test and it’s taken by students in thousands of schools across the country. It is also given at Military Entrance Processing Stations (MEPS).
Naturally, each branch of the United States armed services wants to enlist the best, most qualified candidates each year. The ASVAB is a tool that helps in the achievement of that purpose. Preparing to take the ASVAB is just one more step in the journey toward your goal of joining the U.S. armed services. …
Disclaimer: The tests on this website are for entertainment purposes only, and may not accurately reflect the scores you would attain on a professionally administered ASVAB test.
Drawing a page from Pumpkin Person’s book, I recommend taking several different tests and then comparing results. Use your good judgment about whether a particular test seems reliable–is it covered in ads? Does random guessing get you a score of 148? Did you get a result similar to what you’d expect based on real life experiences?
2. Besides that, I wanted to ask you how much social class and IQ are correlated?
A fair amount.
With thanks to Pumpkin Person
Really dumb people are too dumb to commit as much crime as mildly dumb people
When dumb children are born to rich people, they tend to do badly in life and don’t make much money; they subsequently sink in social status. When smart children are born to poor people, they tend to do well in life and rise in social status. Even in societies with strict social classes where moving from class to class is effectively impossible, we should still expect that really dumb people born into wealth will squander it, leading to their impoverishment. Likewise, among the lower classes, we would still expect that smarter low-class people would do better in life than dumber ones.
This is all somewhat built into the entire definition of “IQ” and what people were trying to measure when they created the tests.
3. Basically do traditional upper classes form separate genetic clusters like Gregory Clark claims?
I haven’t read Clark’s book, but I’m sure the pathetic amount of research I can do here would be nothing compared to what he’s amassed.
A similar pattern of spousal association for IQ scores and personality traits was found in two British samples from Oxford and Cambridge. There was no indirect evidence from either sample to suggest that convergence occurred during marriage. All observed assortative mating might well be due to initial assortment.
This article reviews the literature on assortative mating for psychological traits and psychiatric illness. Assortative mating appears to exist for personality traits, but to a lesser degree than that observed for physical traits, sociodemographic traits, intelligence, and attitudes and values. Concordance between spouses for psychiatric illness has also been consistently reported in numerous studies. This article examines alternative explanations for such observed concordance and discusses the effects of assortative mating on population genetics and the social environment.
In the Minnesota Twin Family Study, assortative mating for IQ was greater than .3 in both the 11- and 17-year-old cohorts. Recognizing this, genetic variance in IQ independent of SES was greater with higher parental SES in the 11-year-old cohort. This was not true, however, in the 17-year-old cohort. In both cohorts, people of higher IQ were more likely to have ‘married down’ for IQ than people of lower IQ were to have ‘married up’. This assortative mating pattern would create greater genetic diversity for IQ in people of higher IQ than in people of lower IQ. As IQ is associated with SES, the pattern could be one reason for the observation of greater genetic variance in IQ independent of SES with greater parental SES in several samples. If so, it could block upward social mobility among those already in lower-SES groups. I discuss possible involved mechanisms and social implications.
Assortative mating is the individuals’ tendency to mate with those who are similar to them in some variables, at a higher rate than would be expected from random. This study aims to provide empirical evidence of assortative mating through the Big Five model of personality and two measures of intelligence using Spanish samples. The sample consisted of 244 Spanish couples. It was divided into two groups according to relationship time. The effect of age, educational level and socioeconomic status was controlled. The results showed strong assortative mating for intelligence and moderate for personality. The strongest correlations for Personality were found in Openness, Agreeableness and Conscientiousness.
The role of personal preference as an active process in mate selection is contrasted with the more passive results of limitations of available mates due to social, educational, and geographical propinquity. The role of personal preference estimated after removing the effects of variables representing propinquity was still significant for IQ and Eysenck’s extraversion-introversion and inconsistency (lie) scales, even though small.
Some argue that the high heritability of IQ renders purely environmental explanations for large IQ differences between groups implausible. Yet, large environmentally induced IQ gains between generations suggest an important role for environment in shaping IQ. The authors present a formal model of the process determining IQ in which people’s IQs are affected by both environment and genes, but in which their environments are matched to their IQs. The authors show how such a model allows very large effects for environment, even incorporating the highest estimates of heritability. Besides resolving the paradox, the authors show that the model can account for a number of other phenomena, some of which are anomalous when viewed from the standard perspective.
4. Are upper class people genetically more intelligent? Or is there an effect of regression to the mean and all classes have about equal chances to spawn high IQ people?”
…James Lee, a real expert in the field, sent me a current best estimate for the probability distribution of offspring IQ as a function of parental midpoint (average between the parents’ IQs). James is finishing his Ph.D. at Harvard under Steve Pinker — you might have seen his review of R. Nisbett’s book Intelligence and How to Get It: Why Schools and Cultures Count.
The results are stated further below. Once you plug in the numbers, you get (roughly) the following:
Assuming parental midpoint of n SD above the population average, the kids’ IQ will be normally distributed about a mean which is around +.6n with residual SD of about 12 points. (The .6 could actually be anywhere in the range (.5, .7), but the SD doesn’t vary much from choice of empirical inputs.)…
Read Hsu’s post for the rest of the details.
In short, while regression to the mean works for everyone, different people regress to different means depending on how smart their particular ancestors were. For example, if two people of IQ 100 have a kid with an IQ of 140, (Kid A) and two people of IQ 120 have a kid of IQ 140, (Kid B), Kid A’s own kids are likely to regress toward 100, while Kid B’s kids are likely to regress toward 120.
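The quoted rule of thumb is easy to put into code. A minimal sketch, assuming the rough slope of 0.6 from Hsu's numbers (the ~12-point residual SD would be noise around this expected value; the function name is mine):

```python
# Offspring IQ model from the quote above:
#   offspring IQ ~ Normal(100 + 0.6 * (midparent - 100), sd ~ 12)
# The 0.6 slope and 12-point residual SD are the rough quoted figures,
# not exact constants.

def offspring_mean(parent_a_iq, parent_b_iq, slope=0.6):
    """Expected offspring IQ given both parents' IQs."""
    midparent = (parent_a_iq + parent_b_iq) / 2
    return 100 + slope * (midparent - 100)

# Two 120-IQ parents: children expected around 112, not 120.
print(offspring_mean(120, 120))
# Two 140-IQ parents: children expected around 124.
print(offspring_mean(140, 140))
```

In other words, under these assumptions a couple's children are expected to regress about 40% of the way from the parental midpoint back toward 100.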
We can look at the effects of parental SES on SAT Scores and the like:
Personally, I know plenty of extremely intelligent people who come from low-SES backgrounds, but few of them ended up low-SES. Overall, I’d expect highly intelligent people to move up in status and less intelligent people to move down over time, with the upper class thus sort of “collecting” high-IQ people, but there are obviously regional and cultural effects that may make it inappropriate to compare across groups.
Apropos Friday’s conversation about the transition from hunting to pastoralism and the different strategies hunters employ in different environments, I got to thinking about how these different food-production systems could influence the development of different “intelligences,” or at least mental processes that underlie intelligence.
Ingold explains that in warm climes, hunter-gatherers have many food resources they can exploit, and if one resource starts running low, they can fairly easily switch to another. If there aren’t enough yams around, you can eat melons; if not enough melons, squirrels; if no squirrels, eggs. I recall a study of Australian Aborigines who agreed to go back to hunter-gathering for a while after living in town for several decades. Among other things (like increased health,) scientists noted that the Aborigines increased the number of different kinds of foods they consumed from, IIRC, about 40 per week to 100.
By contrast, hunters in the arctic are highly dependent on exploiting only a few resources–fish, seals, reindeer, and perhaps a few polar bears and foxes. Ingold claims that there are (were) tribes that depended largely on only a few major hunts of migrating animals (netting hundreds of kills) to supply themselves for the whole year.
If those migrating herds change their course by even a few miles, it’s easy to see how the hunters could miss them entirely and, with no other major species around to exploit, starve over the winter.
Let’s consider temperate agriculture as well: the agriculturalist can store food better than the arctic hunter (seal meat does not do good things in the summer,) but lacks the tropical hunter-gatherer’s flexibility; he must stick to his fields and keep working, day in and day out, for a good nine months in a row. Agricultural work is more flexible than assembly line work, where your every minute is dictated by the needs of the factory, but a farmer can’t just wander away from his crops to go hunt for a month just because he feels like it, nor can he hope to make up for a bad wheat harvest by wandering into his neighbor’s fields and picking their potatoes.
Which got me thinking: clearly different people are going to do better at different systems.
But first, what is intelligence? Obviously we could define it in a variety of ways, but let’s stick to reasonable definitions, eg, the ability to use your brain to achieve success, or the ability to get good grades on your report card.
A variety of mental traits contribute to “intelligence,” such as:
The ability to learn lots of information. Information is really useful, both in life and on tests, and smarter brains tend to be better at storing lots and lots of data.
Flexible thinking. This is the ability to draw connections between different things you’ve learned, to be creative, to think up new ideas, etc.
Some form of Drive, Self Will, or long-term planning–that is, the ability to plan for your future and then push yourself to accomplish your goals. (These might more properly be two different traits, but we’ll keep them together for now.)
Your stereotypical autistic, capable of memorizing large quantities of data but not doing much with them, has trait #1 but not 2 or 3.
Artists and musicians tend to have a lot of trait #2, but not necessarily 1 or 3 (though successful artists obviously have a ton of #3).
And an average kid who’s not that bright but works really hard, puts in extra hours of effort on their homework, does extra credit assignments, etc., has a surfeit of #3 but not much 2 or 1.
Anyway, it seems to me like the tropical hunting/gathering environment, with many different species to exploit, would select for flexible thinking–if one food isn’t working out, look for a different one. This may also apply to people from tropical farming/horticulturalist societies.
By contrast, temperate farming seems more likely to select for planning–you can’t just wander off or try to grow something new in time for winter if your first crop doesn’t work out.
Many people have noted that America’s traditionally tropical population (African Americans) seems to be particularly good at flexible thinking, leading to much innovation in arts and music. They are not as talented, though, at Drive, leading to particularly high highschool dropout rates.
America’s traditionally rice-farming population (Asians,) by contrast, has been noted for over a century for its particularly high drive and ability to plan for the future, but not so much for contributions to the arts. East Asian people are noted for their particularly high IQ/SAT/PISA scores, despite the fact that China lags behind the West in GDP and quality of life terms. (Japan, of course, is a fully developed country.) One potential explanation for this is that the Chinese, while very good at working extremely hard, aren’t as good at flexible thinking that would help spur innovation. (I note that the Japanese seem to do just fine at flexible thinking, but you know, the Japanese aren’t Chinese and Japan isn’t China.)
(I know I’m not really stating anything novel.) But the real question is:
What kind of mental traits might pastoralism, arctic pastoralism, or arctic hunting select for?
It’s been a slow week for comments, probably because everyone is still passed out/out of town/tired/sick/busy from all of the holiday revelry. Some of you are still celebrating. Still, I invite you all to come in, take a seat by the fire, pick up a warm mug of cocoa, and enjoy yourselves with some relaxing chat and mingle.
But the stone tools on Naxos appeared to be hewn by Paleolithic people — much more ancient humans, perhaps not members of our species at all.
Since 2013, Carter has co-directed a new round of investigations on Naxos. He and a handful of others working in the region have begun to furnish evidence that humans reached the islands of the Aegean Sea 250,000 years ago and maybe earlier. If those dates are confirmed, it means the first people there were Neanderthals, their probable ancestors, Homo heidelbergensis or maybe even Homo erectus. …
Other researchers insist that much better evidence needs to be discovered to attribute such complex behaviours to Neanderthals and other hominins …
Then, in 1988, archeologists began excavating a collapsed rock shelter on the southern shore of Cyprus. They found about 1,000 bladelets and small tools typically associated with pre-Neolithic people.
“There was a lot of skepticism at first,” said Alan Simmons, an anthropologist at the University of Nevada Las Vegas who was involved in the work. “But once we had all the radiocarbon dates, it came to be accepted.”
The site pushed the peopling of Cyprus back to 12,000 years ago — only a few millennia, but enough to break the Neolithic barrier and establish the presence of hunter-gatherers. Today, the distance to mainland Turkey is about 75 kilometres. Sea levels have fluctuated and the crossing was once shorter, but Cyprus has always been an island.
The discoveries on Cyprus overturned the idea that hunter-gatherers were incapable or unwilling to travel by sea. But the debate was still confined to the activities of our species, Homo sapiens.
In 2008, a Greek-American team of archeologists began searching on the southwest coast of Crete for pre-Neolithic artifacts. They found many from roughly the same era as those on Cyprus. But they also found rough quartz hand axes and cleavers that appeared to be much more ancient.
The team discovered artifacts eroding out of a layer of soil that dated to at least 130,000 years ago, and the tools themselves looked like those archeologists associate with archaic hominin sites on the mainland — ones that are at least 250,000 years old. …
“only about one-quarter of one percent (0.25 percent) of all whites will be violently victimized by a black person this year”
This would mean that it’s 2.5% every 10 years. A typical American white lives 80 years, so their lifelong chance of being attacked by a black person is 20% (!!!), exactly the same number they argue is a woman’s lifetime chance of being raped. The same Tim Wise made a big deal of how high that is. Of course he takes annual numbers for other crimes and lifelong numbers for rape.
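The commenter's figure is the simple linear estimate (0.25% × 80 years). Treating each year as an independent 0.25% risk and compounding gives a slightly lower number; both are back-of-the-envelope calculations resting entirely on the quoted annual rate:

```python
# Lifetime risk from a constant annual rate, two ways. The 0.25% annual
# figure is from the quoted claim; everything else is arithmetic.

annual_rate = 0.0025  # 0.25% per year
years = 80

linear_estimate = annual_rate * years          # the comment's method
compounded = 1 - (1 - annual_rate) ** years    # treating years as independent

print(f"linear:     {linear_estimate:.1%}")  # 20.0%
print(f"compounded: {compounded:.1%}")       # 18.1%
```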
Kanazawa (2014) reviewed the research on the link between obesity and IQ. What he found was that the studies concluding that obesity causes lowered intelligence were all cross-sectional. Longitudinal studies that looked into the link between obesity and intelligence found that those who had low IQs since childhood became obese later in life, and that obesity does not lead to low IQ. … He states that those with IQs below 74 gained 5.19 BMI points, whereas those with IQs above 126 gained 3.73 BMI points in 22 years, which is a statistically significant difference. Also noted was that those at age 7 who had IQs above 125 had a 13.5 percent chance of being obese at age 51, whereas those with IQs below 74 at age 7 had a 31.9 percent chance of being obese.
Thanks everyone, and keep up the good work/great comments!
To summarize, our current generous welfare system is making it increasingly difficult for hard-working members of society to afford to have children. Lazy and incapable people, meanwhile, are continuing to have children without restriction, courtesy of those hard-working people. It’s more than likely that average intelligence is falling as a result of these pressures.
Ever since someone proposed the idea of eugenic (ie, good) breeding, people have been concerned by the possibility of dysgenic (bad) breeding. If traits are heritable (as, indeed, they are,) then you can breed for more of that trait or less of that trait. Anyone who has ever raised livestock or puppies knows as much–the past 10,000 years of animal husbandry have been devoted to producing superior stock, long before anyone knew anything about “genes.”
Historically–that is, before 1900–the world was harsh and survival far from guaranteed. Infant and childhood mortality were high, women often died in childbirth, famines were frequent, land (in Europe) was scarce, and warfare + polygamy probably prevented the majority of men from ever reproducing. In those days, at least in Western Europe, the upper classes tended to have more (surviving) children than the lower classes, leading to a gradual replacement of the lower classes.
The situation today is, obviously, radically different. Diseases–genetic or pathogenic–kill far fewer people. We can cure Bubonic Plague with penicillin, have wiped out Smallpox, and can perform heart surgery on newborns whose hearts were improperly formed. Welfare prevents people from starving in the streets and the post-WWII prosperity led to an unprecedented percent of men marrying and raising families. (The percent of women who married and raised families probably didn’t change that much.)
All of these pleasant events raise concerns that, long-term, prosperity could result in the survival of people whose immune systems are weak, carry rare but debilitating genetic mutations, or are just plain dumb.
So how is Western fertility? Are the dumb outbreeding the smart, or should we be grateful that the “gender studies” sorts are selecting themselves out of the population? And with below-replacement fertility rates + unprecedented levels of immigration, how smart are our immigrants (and their children?)
Data on these questions is not the easiest to find. Jayman has data on African American fertility (dysgenic,) but white American fertility may be currently eugenic (after several decades of dysgenics.) Jayman also notes a peculiar gender difference in these trends: female fertility is strongly dysgenic, while male is eugenic (for both whites and blacks). Given that historically, about 80% of women reproduced vs. only 40% of males, I think it likely that this pattern has always been true: women only want to marry intelligent, high-performing males, while males are okay with marrying dumb women. (Note: the female ability to detect intelligence may be broken by modern society.)
Counter-Currents has a review of Lynn’s Dysgenics with some less hopeful statistics, like an estimation that Greece lost 5 IQ points during the Baby Boom, which would account for their current economic woes. (Overall, I think the Baby Boom had some definite negative effects on the gene pool that are now working their way out.)
Richwine estimates the IQ of our immigrant Hispanic-American population at 89.2, with a slight increase for second and third-generation kids raised here. Since the average American IQ is 98 and Hispanics are our fastest-growing ethnic group, this is strongly dysgenic. (The rest of our immigrants, from countries like China, are likely to be higher-IQ than Americans.) However, since Hispanic labor is typically used to avoid African American (reported 85 average IQ) labor, the replacement of African Americans with Mexicans is locally eugenic–hence the demand for Hispanic labor.
Without better data, none of this conclusively proves whether fertility in the West is currently eugenic or dysgenic, but I can propose three main factors that should be watched for their potentially negative effects:
Welfare–I suspect the greater black reliance on welfare may be driving black dysgenics, but some other factor like crime could actually be at play.
I’m going to focus on the last one because it’s the only one that hasn’t already been explained in great detail elsewhere.
For American women, childbearing is low-class and isolating.
For all our fancy talk about maternity leave, supporting working moms, etc., America is not a child-friendly place. Society frowns on loud, rambunctious children running around in public, and don’t get me started on how public schools deal with boys. Just try to find something entertaining for both kids and grown-ups that doesn’t cost an arm and a leg for larger families–admission to the local zoo for my family costs over $50 and requires over an hour, round trip, of driving. (And it isn’t even a very good zoo.) Now try to find an activity your childless friends would also like to do with you.
Young women are constantly told that getting pregnant will ruin their lives (most vocally by their own parents,) and that if they want to stay home and raise children, they are social parasites. (Yes, literally.) We see child-rearing, like tomato picking, as a task best performed by low-wage immigrant daycare workers.
I am reminded here of a mom’s essay I read about the difference in attitudes toward children in the US and Israel, the only Western nation with a positive native fertility rate. Israel, as she put it, is a place where children are valued and “kids can be kids.” I’ve never been to Israel, so I’ll just have to trust her:
How Israelis love kids, anyone’s kids. The country is a free-for-all for the youngest set, something I truly appreciated only once I started bringing my own children there. When I was a teenager visiting Israel from the States, I noticed how people there just don’t allow a child to cry. One pout, one sob, and out comes candy, trinkets and eager smiles to turn a kid around. That would never happen back home—a stranger give a child candy?!—but in Israel, in a nation that still harbors a post-Holocaust mentality, there is no reason that a Jewish child should ever cry again, if someone can help it.
Incidentally, if you qualify under Israeli health care law, you can get a free, state-funded abortion. Abortion doesn’t appear to have destroyed Israel’s fertility.
Since male fertility is (probably) already eugenic, then the obvious place to focus is female fertility: make your country a place where children are actively valued and intelligent women are encouraged instead of insulted for wanting them, and–hopefully–things can improve.
I do not believe that IQ tests measure intelligence. Rather I believe that they measure a combination of intelligence, learning and concentration at a particular point in time. …
You may wish to read the whole thing there.
The short response is that I basically agree with the bit quoted, and I suspect that virtually everyone who takes IQ tests seriously does as well. We all know that if you come into an IQ test hungover, sick, and desperately needing to pee, you’ll do worse than if you’re well-rested, well-fed, and feeling fine.
That time I fell asleep during finals?
Not so good.
Folks who study IQ for a living, like the famous Flynn, believe that environmental effects like the elimination of leaded gasoline and general improvements in nutrition have raised average IQ scores over the past century or two. (Which I agree seems pretty likely.)
The ability to sit still and concentrate is especially variable in small children–little boys are especially notorious for preferring to run and play instead of sit at a desk and solve problems. And while real IQ tests (as opposed to the SAT) have been designed not to hinge on whether or not a student has learned a particular word or fact, the effects of environmental “enrichment” such as better schools or high-IQ adoptive parents do show up in children’s test scores–but fade away as children grow up.
There’s a very sensible reason for this. I am reminded here of an experiment I read about some years ago: infants (probably about one year old) were divided into two groups, and one group was taught how to climb the stairs. Six months later, the special-instruction group was still better at stair-climbing than the no-instruction group. But two years later, both groups of children were equally skilled at stair-climbing.
There is only so good anyone will ever get at stair-climbing, after all, and after two years of practice, everyone is about equally talented.
The sensible conclusion is that we should never evaluate an entire person based on just one IQ test result (especially in childhood.)
The mistake some people (not Chauncey Tinker) make is to jump from “IQ tests are not 100% reliable” to “IQ tests are meaningless.” Life is complicated, and people like to sort it into neat little packages. Friend or foe, right or wrong. And while a single IQ test is insufficient to judge an entire person, the results of multiple IQ tests are fairly reliable–and if we aggregate our results over multiple people, we get even better results.
As with all data, more tests + more people => random incorrect data matters less.
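To see why aggregation helps, here is a toy simulation (all numbers illustrative, not from any study): treat each observed score as true ability plus independent per-test noise, then compare single tests against the average of many.

```python
import random
import statistics

# Toy model: observed score = true ability + independent per-test noise.
# The 10-point noise SD is an illustrative assumption, not an estimate.
TRUE_IQ = 110
NOISE_SD = 10

def observed_score(n_tests, rng):
    """Average score across n noisy test administrations."""
    return statistics.mean(rng.gauss(TRUE_IQ, NOISE_SD) for _ in range(n_tests))

rng = random.Random(0)
one_offs = [observed_score(1, rng) for _ in range(500)]
averaged = [observed_score(25, rng) for _ in range(500)]

# Averaging 25 tests shrinks the random error by a factor of sqrt(25) = 5.
print(statistics.stdev(one_offs))   # ~10
print(statistics.stdev(averaged))   # ~2
```

The same logic applies across people: averaging many individuals' scores makes any one person's bad test day matter less.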
I think the “IQ tests are meaningless” crowd is operating under the assumption that IQ scholars are actually dumb enough to blindly judge an entire person based on a single childhood test. (Dealing with this strawman becomes endlessly annoying.)
Like all data, the more the merrier:
So this complicated-looking graph shows us the effects of different factors on IQ scores over time, using several different data sets (mostly twin studies.)
At 5 years old, “genetic” factors, (the diamond and thick lines) are less important than “shared environment.” Shared environment=parenting and teachers.
That is, at the age of 5, a pair of identical twins who were adopted by two different families will have IQ scores that look more like their adoptive parents’ IQ scores than their genetic relatives’ IQ scores. Like the babies taught to climb stairs before their peers, the kids whose parents have been working hard to teach them their ABCs score better than kids whose parents haven’t.
By the age of 7, however, this parenting effect has become less important than genetics. This means that those adopted kids are now starting to have IQ scores more similar to their biological relatives than to their adoptive relatives. Like the kids from the stair-climbing experiment, their scores are now more based on their genetic abilities (some kids have better balance and coordination, resulting in better stair-climbing) than on whatever their parents are doing with them.
By the age of 12, the effects of parenting drop to around 0. At this point, it’s all up to the kid.
Of course, adoption studies are not perfect–adoptive parents are not randomly selected and have to go through various hoops to prove that they will be decent parents, and so tend not to be the kinds of people who lock their children in closets or refuse to feed them. I am sure this kind of parenting does terrible things to IQ, but there is no ethical way to design a randomized study to test them. Thankfully, the % of children subject to such abysmal parenting is very low. Within the normal range of parenting practices, parenting doesn’t appear to have much (if any) effect on adult IQ.
The point of all this is that what I think Chauncey means by “learning,” that is, advantages some students have over others because they’ve learned a particular fact or method before the others do, does appear to have an effect on childhood IQ scores, but this effect fades with age.
I think Pumpkin Person is fond of saying that life is the ultimate IQ test.
While we can probably all attest to a friend who is “smart but lazy,” or smart but interested in a field that doesn’t pay very well, like art or parenting, the correlation between IQ and life outcomes (eg, money) is amazingly solid:
The correlation even holds internationally:
Map of IQ by country. Source: Wikipedia.
There’s a simple reason why this correlation holds despite lazy and non-money-oriented smart people: there are also lazy and non-money-oriented dumb people, and lazy smart people tend to make more money and make better long-term financial decisions than lazy dumb people.
Note that none of these graphs are the result of a single test. A single test would, indeed, be useless.
More than 13 million pain-blocking epidural procedures are performed every year in the United States. Although epidurals are generally regarded as safe, there are complications in up to 10 percent of cases, in which the needles are inserted too far or placed in the wrong tissue.
A team of researchers from MIT and Massachusetts General Hospital hopes to improve those numbers with a new sensor that can be embedded into an epidural needle, helping anesthesia doctors guide the needle to the correct location.
Since inserting a giant needle into your spine is really freaky, but going through natural childbirth is hideously painful, I strongly support this kind of research.
More than half of Americans under the age of 25 who have a bachelor’s degree are either unemployed or underemployed. According to The Christian Science Monitor, nearly 1 percent of bartenders and 14 percent of parking lot attendants have a bachelor’s degree.
Adding additional degrees is no guarantee of employment either. According to a recent Urban Institute report, nearly 300,000 Americans with master’s degrees and over 30,000 with doctorates are on public relief. …
Unless you have a “hard” skill, such as a mastery of accounting, or a vocational certificate (e.g., in teaching), your liberal arts education generally will not equip you with the skill set that an employer will need.
Obviously colleges still do some good things. Much of the research I cite here in this blog originated at a college of some sort. And of course, if you are careful and forward thinking, you can use college to obtain useful skills/information.
But between the years, money, and effort students spend, not to mention the absurd political indoctrination, college is probably a net negative for most students.
A few doctors in the 1400s probably saved the lives of their patients, but far more killed them.
Okay, so this is just me thinking (and mathing) out loud. Suppose we have two different groups (A and B) of 100 people each (arbitrary number chosen for ease of dividing.) In Group A, people are lumped into 5 large “clans” of 20 people each. In Group B, people are lumped in 20 small clans of 5 people each.
Each society has an average IQ of 100–ten people with 80IQs, ten people with 120IQs, and eighty people with 100IQs. I assume that there is slight but not absolute assortative mating, so that most high-IQ and low-IQ people end up marrying someone average.
IQ pairing:   100/100   100/80   100/120   80/80   120/120
Couples:         30        9         9        1        1
Okay, so there should be thirty couples where both partners have 100IQs, nine 100/80IQ couples, nine 100/120IQ couples, one 80/80IQ couple, and one 120/120IQ couple.
If each couple has 2 kids, distributed thusly:
100/100=> 10% 80, 10% 120, and 80% 100
120/120=> 100% 120
80/80 => 100% 80
120/100=> 100% 110
80/100 => 100% 90
Then we’ll end up with eight 80IQ kids, eighteen 90IQ, forty-eight 100IQ, eighteen 110IQ, and eight 120IQ.
So, under pretty much perfect and totally arbitrary conditions that probably only vaguely approximate how genetics actually works (also, we are ignoring the influence of random chance on the grounds that it is random and therefore evens out over the long-term,) our population approaches a normal bell-curved IQ distribution.
Not bad for a very, very rough model that is trying to keep the math very simple so I can write it in a blog post window instead of a paper. (And the totals do check out: 8 + 18 + 48 + 18 + 8 = 100 children, so nobody has gotten lost.)
Anyway, now let’s assume that we don’t have a 2-child policy in place, but that being smart (or dumb) does something to your reproductive chances.
In the simplest model, people with 80IQs have zero children, 90s have one child, 100s have 2 children, 110s have 3 children, and 120s have 4 children.
oh god but the couples are crossed so do I take the average or the top IQ? I guess I’ll take average.
IQ pairing:   100/100   100/80   100/120   80/80   120/120
Couples:         30        9         9        1        1
Children:        60        9        27       0        4

(The 100/100 couples’ 60 children split 6 / 48 / 6 across IQs 80 / 100 / 120.)
So our new distribution is six 80IQ, nine 90IQ, forty-eight 100IQ, twenty-seven 110IQ, and ten 120IQ.
(checks math oh good it adds up to 100.)
We’re not going to run gen three, as obviously the trend will continue.
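For anyone who wants to check the arithmetic, here is the toy model transcribed into code, with fertility supplied as a rule on the couple's average IQ. (A sketch of this post's made-up numbers only; no claim about real genetics.)

```python
from collections import Counter

# Couple types (parent IQs) and their counts, straight from the text.
couples = {
    (100, 100): 30, (100, 80): 9, (100, 120): 9, (80, 80): 1, (120, 120): 1,
}

# Each couple type's distribution over child IQs, per the rules above.
child_dist = {
    (100, 100): {80: 0.1, 100: 0.8, 120: 0.1},
    (100, 80):  {90: 1.0},
    (100, 120): {110: 1.0},
    (80, 80):   {80: 1.0},
    (120, 120): {120: 1.0},
}

def next_generation(kids_per_couple):
    """kids_per_couple: maps a couple's average IQ to its number of children."""
    kids = Counter()
    for pair, n_couples in couples.items():
        n_kids = n_couples * kids_per_couple(sum(pair) / 2)
        for iq, frac in child_dist[pair].items():
            kids[iq] += n_kids * frac
    return {iq: round(n) for iq, n in kids.items()}

# Flat two-child policy: 8 / 18 / 48 / 18 / 8 children across IQs 80-120.
print(next_generation(lambda avg: 2))
# IQ-linked fertility (0/1/2/3/4 children at average IQ 80/90/100/110/120):
print(next_generation(lambda avg: {80: 0, 90: 1, 100: 2, 110: 3, 120: 4}[avg]))
```

Both runs reproduce the counts worked out by hand above, and each sums to 100 children.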
Let’s go back to our original clans. Society A has 5 clans of 20 people each; Society B has 20 clans of 5 people each.
With 10 high-IQ and 10 low-IQ people per society, each clan in A is likely to have 2 smart and 2 dumb people. Each clan in B, by contrast, is likely to have only 1 smart or 1 dumb person. For our model, each clan will be the reproductive unit rather than each couple, and we’ll take the average IQ of each clan.
Society A: 5 clans with average of 100 IQ => social stasis.
Society B: 20 clans, 10 with an average of 96, 10 with an average of 104. Not a big difference, but if the 104s have even just a few more children over the generations than the 96s, they will gradually increase as a % of the population.
Of course, over the generations, a few of our 5-person clans will get two smart people (average IQ 108), a dumb and a smart (average 100), and two dumb (92.) The 108 clans will do very well for themselves, and the 92 clans will do very badly.
If society functions so that smart people have more offspring than dumb people (definitely not a given in the real world,) then:

In society A, everyone benefits from the smart people, whose brains uplift their entire extended families (large clans.) This helps everyone, especially the least capable, who otherwise could not have provided for themselves. However, the average IQ in society A doesn’t move much, because you are likely to have equal numbers of dumb and smart people in each family, balancing each other out.

In Society B, the smart people are still helping their families, but since their families are smaller, random chance dictates that they are less likely to have a dumb person in their families. The families with the misfortune to have a dumb member suffer and have fewer children as a result; the families with the good fortune to have a smart member benefit and have more children as a result.

Society B has more suffering, but also evolves to have a higher average IQ. Society A has less suffering, but its IQ does not change.

Obviously this is a thought experiment and should not be taken as proof of anything about real world genetics. But my suspicion is that this is basically the mechanism behind the evolution of high IQ in areas with long histories of nuclear, atomized families, and the mechanism suppressing IQ in areas with strongly tribal norms. (See HBD Chick for everything family structure related.)
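The Society A / Society B contrast can also be sketched with a quick simulation (purely illustrative, assuming people land in clans at random): smaller clans produce more between-clan variance in average IQ, which is the raw material that differential fertility acts on.

```python
import random
import statistics

def clan_average_spread(clan_size, n_trials=1000, seed=0):
    """Average spread (population SD) of clan mean IQs, over random shufflings
    of 10 smart (120), 10 dumb (80), and 80 average (100) people into clans."""
    rng = random.Random(seed)
    people = [120] * 10 + [80] * 10 + [100] * 80
    spreads = []
    for _ in range(n_trials):
        rng.shuffle(people)
        clans = [people[i:i + clan_size] for i in range(0, len(people), clan_size)]
        spreads.append(statistics.pstdev(sum(c) / clan_size for c in clans))
    return statistics.mean(spreads)

print(clan_average_spread(20))  # Society A: clan means hug 100 tightly
print(clan_average_spread(5))   # Society B: clan means spread much more widely
```

The exact numbers don't matter; the point is that the size-5 clans show roughly double the spread of the size-20 clans, so selection between clans has much more to grab onto in Society B.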
By the way, guys, I have not been able to write as much as I would like to, lately, so I am dropping the Wed. post and only going to be updating 4 times a week. Hopefully I’ll get more time soon. 🙂
It is very easy to dismiss Appalachia’s problems by waving a hand and saying, “West Virginia has an average IQ of 98.”
But there are a hell of a lot of states that have average IQs lower than West Virginia, but are still doing better. For that matter, France has a lower average IQ, and France is still doing pretty well for itself.
So we’re going to discuss some alternative theories.
(And my apologies to WV for using it as a stand-in for the entirety of Greater Appalachia, which, as discussed a few days ago, includes parts of a great number of states, from southern Pennsylvania to eastern Texas. Unfortunately for me, only WV, Kentucky, and Tennessee fall entirely within Greater Appalachia, and since it is much easier to find data aggregated by state than by county or “cultural region,” I’ve been dependent on these states for much of my research.)
At any rate, it’s no secret that Appalachia is not doing all that well:
The Death of Manufacturing
Having your local industries decimated by foreign competition and workforces laid off due to automation does bad things to your economy. These things look great on paper, where increasing efficiency and specialization result in higher profits for factory owners, but tend to work out very badly for the folks who have lost their jobs.
Indeed, the US has barely even begun thinking about how we plan on dealing with the effects of continued automation. Do 90% of people simply become irrelevant as robots take over their jobs? Neither “welfare for everyone” nor “everybody starves” seem like viable solutions. So far, most politicians have defaulted to platitudes about how “more education” will be the solution to all our woes, but how you turn a 45-year-old low-IQ meat packer who just got replaced by a robot into a functional member of the “information economy” remains to be seen.
Of course, economic downturns happen; fads come and go; industries go in and out. The Rust Belt, according to Wikipedia, runs north of Greater Appalachia, through Pennsylvania, New York, northern Ohio, Detroit, etc. These areas have been struggling for decades, but many of them, like Pittsburgh, are starting to recover. Appalachia, by contrast, is still struggling.
This may just be a side effect of Appalachia being more rural; Pittsburgh is a large city with millions of people employed in a variety of industries. If one goes out, others can, hopefully, replace it. But in a rural area with only one or two large employers–sometimes literal “company towns” built near mines–if the main industry goes out, you may not get anything coming back in.
Appalachia has geography that makes it difficult to transport goods in and out as cheaply as you can transport them elsewhere, but then, so does Switzerland, and Switzerland seems to be doing pretty well. (Of course, Switzerland seems to have specialized in small, expensive, easy to transport luxury goods like watches, chocolate, and bank deposits, while Appalachia has specialized in cheap, heavy, unpleasant to produce goods like coal.)
But I am being over-generous: America killed its manufacturing.
We killed it because our upper classes look down their noses at manufacturing; such jobs are unpleasant and low-class, and therefore they cannot understand that for some people, these jobs are the only thing standing between them and poverty. Despite the occasional protest against outsourcing, our government–Republicans and Democrats–has forged ahead with its free-trade, send-everything-to-China-and-fire-the-Americans, import-Mexicans-and-fire-the-Americans, and then reap-the-profits agenda.
Too Much Regulation
Over-regulation begins with the best of intentions, then breaks your industries. Nobody wants to die in a fire or a cave-in, but you can’t regulate away all risk and still get anything done.
Every regulation, every record-keeping requirement, every mandated compliance, is a tax on efficiency–and thus on profits. Some regulation, of course, probably increases profits–for example, I am more likely to buy a medicine if I have some guarantee that it isn’t made with rat poison. But beyond that guarantee, increasing requirements that companies test all of their products for toxins imposes more costs than the companies recoup–at which point, companies tend to leave for more profitable climes.
Likewise, while health insurance sounds great, running it through employers is madness. Companies should devote their efforts to making products (or services,) not hiring expensive lawyers and accountants to work through the intricacies of health care law compliance and income withholding.
The few manufacturers left in Appalachia (and probably elsewhere in the country) have adopted a creative policy to avoid paying health insurance costs for their workers: fire everyone just before they qualify for insurance. By hiring only temp workers, outsourcing everything, and only letting employees bill 20 hours a week, manufacturers avoid complying with employee-related regulations.
Oh, sure, you might think you could just get two 20-hour a week jobs, but that requires being able to schedule two different jobs. When you have no idea whether you are going to be working every day or not until you show up for work at 7 AM, and you’ll get fired if you don’t show up, getting a second job simply isn’t an option.
I have been talking about over-regulation for over a decade, but it is the sort of issue that it is difficult to get people worked up over, much less make them understand if they haven’t lived it. Democrats just look aghast that anyone would suggest that more regulations won’t lead automatically to more goodness, and Republicans favor whichever policies lead to higher profits, without any concern for the needs of workers.
New York Times columnist Thomas L. Friedman recently encapsulated this view in a piece called “Start-Ups, Not Bailouts.” His argument: Let tired old companies that do commodity manufacturing die if they have to. If Washington really wants to create jobs, he wrote, it should back startups.
Friedman is wrong. Startups are a wonderful thing, but they cannot by themselves increase tech employment. Equally important is what comes after that mythical moment of creation in the garage, as technology goes from prototype to mass production. This is the phase where companies scale up. They work out design details, figure out how to make things affordably, build factories, and hire people by the thousands. Scaling is hard work but necessary to make innovation matter.
The scaling process is no longer happening in the U.S. And as long as that’s the case, plowing capital into young companies that build their factories elsewhere will continue to yield a bad return in terms of American jobs. …
As time passed, wages and health-care costs rose in the U.S. China opened up. American companies discovered that they could have their manufacturing and even their engineering done more cheaply overseas. When they did so, margins improved. Management was happy, and so were stockholders. Growth continued, even more profitably. But the job machine began sputtering.
The 10X Factor
Today, manufacturing employment in the U.S. computer industry is about 166,000, lower than it was before the first PC, the MITS Altair 8800, was assembled in 1975 (figure-B). Meanwhile, a very effective computer manufacturing industry has emerged in Asia, employing about 1.5 million workers—factory employees, engineers, and managers. The largest of these companies is Hon Hai Precision Industry, also known as Foxconn. The company has grown at an astounding rate, first in Taiwan and later in China. Its revenues last year were $62 billion, larger than Apple (AAPL), Microsoft (MSFT), Dell (DELL), or Intel. Foxconn employs over 800,000 people, more than the combined worldwide head count of Apple, Dell, Microsoft, Hewlett-Packard (HPQ), Intel, and Sony (SNE) (figure-C).
Companies don’t scale up in the US because dealing with the regulations is monstrous. Anyone who has worked in industry can tell you this; heck, even Kim Levine, author of Millionaire Mommy (don’t laugh at the title, it’s actually a pretty good book,) touches on the subject. Levine notes that early in the process of scaling up the manufacture of her microwavable pillows, she had dreams of owning her own little factory, but once she learned about all of the regulations she would have to comply with, she decided that would be a horrible nightmare.
I don’t have time to go into more detail on the subject, but here is a related post from Slate Star Codex:
I started the book with the question: what exactly do real estate developers do? …
As best I can tell, the developer’s job is coordination. This often means blatant lies. The usual process goes like this: the bank would be happy to lend you the money as long as you have guaranteed renters. The renters would be happy to sign up as long as you show them a design. The architect would be happy to design the building as long as you tell them what the government’s allowing. The government would be happy to give you your permit as long as you have a construction company lined up. And the construction company would be happy to sign on with you as long as you have the money from the bank in your pocket. Or some kind of complicated multi-step catch-22 like that. The solution – or at least Trump’s solution – is to tell everybody that all the other players have agreed and the deal is completely done except for their signature. The trick is to lie to the right people in the right order, so that by the time somebody checks to see whether they’ve been conned, you actually do have the signatures you told them that you had. The whole thing sounds very stressful.
The developer’s other job is dealing with regulations. The way Trump tells it, there are so many regulations on development in New York City in particular and America in general that erecting anything larger than a folding chair requires the full resources of a multibillion dollar company and half the law firms in Manhattan. Once the government grants approval it’s likely to add on new conditions when you’re halfway done building the skyscraper, insist on bizarre provisions that gain it nothing but completely ruin your chance of making a profit, or just stonewall you for the heck of it if you didn’t donate to the right people’s campaigns last year. Reading about the system makes me both grateful and astonished that any structures have ever been erected in the United States at all, and somewhat worried that if anything ever happens to Donald Trump and a few of his close friends, the country will lose the ability to legally construct artificial shelter and we will all have to go back to living in caves.
The current socio-economic system is designed by rootless, soulless, high-IQ, low-time-preference, money-/status-grubbing homo economicus for the benefit of those same homo economicus. It is a system designed for intelligent sociopaths. Those who are rootless with high IQ and low time preference can succeed rather well in this system, but it destroys those who need rootedness, or those who are low-IQ or high-time-preference.
Kevin says, “Nothing happened to them. There wasn’t some awful disaster.” But he’s wrong: there was a disaster; not just one, but multiple related disasters, all occurring simultaneously. …
Every support the white working class (and for that matter the black working class) had vanished within less than a generation. There was a concerted effort to destroy these supports, and this effort succeeded. Through minimal fault of their own the white working class was left with nothing holding them up.
Personally, I lack good first-hand insight into working class cultural matters; I have no idea how much Hollywood mores have penetrated and changed people’s practical lives in rural Arkansas. I must defer, there, to people more knowledgeable than myself.
While death rates have been falling for the rest of the developed world and for America’s blacks and Hispanics, death rates have been rising over the past couple of decades for American whites–middle aged and younger white women, to be exact. They’re up pretty much everywhere, but Appalachia has been the hardest hit.
The first thing everyone seems to cite in response is meth. And indeed, it appears that there is a lot of meth in Appalachia (and a lot of other places):
But I don’t think this explains why death rates are headed up among women. Maybe I’m wrong, (I know rather little about drug use patterns,) but it doesn’t seem like women would be more likely to OD on meth than men. If anything, I get the impression that illegal drugs that fuck you up and kill you are more of a guy thing than a gal thing. Men are probably far more likely to die of alcohol-related causes like drunk driving and cirrhosis of the liver than women, for example, and you don’t even have to deal with criminals to get alcohol.
So, while I agree that drugs appear to be a rising problem, I don’t think they are the problem. (And even so, drug overdoses only raise the deeper question of why more people are using drugs.)
As I mentioned a few posts ago, SpottedToad ran the death rate data by county and came up with three significant correlations: poverty, obesity, and disability. (I don’t know if he looked at meth/drug use by county.)
I, for one, am not surprised to find out that disabled, overweight people are not in the best of health.
Here are SpottedToad’s graphs, showing the correlations he found–I recommend reading his entire post.
Obviously one possibility is that unemployed people feel stressed, binge on cheap crap, get sick, get SSDI, and then die.
But then why are death rates only going up for white women? Plenty of white men are unemployed; plenty of black men and women are poor, fat, and disabled.
Obviously there are a ton of possible confounders–perhaps poor people just happen to make bad life decisions that both make them poor and result in bad health, like smoking cigarettes. Perhaps poor people have worse access to health care, or perhaps being really sick makes people poor. Or maybe the high death rates just happen to be concentrated among people who happen to be fat for purely biological reasons–it appears that the British are among the fattest peoples in Europe, and the Scottish are fatter than the British average. (Before anyone gets their hackles up, I should also note that white Americans are slightly fatter than Scots.)
And as many people have noted, SSI/SSDI are welfare for people who wouldn’t otherwise qualify.
In my correspondence with an observing teacher in the hill country of western Pennsylvania, she reported that in her school a condition was frequent in the families, namely, that the children could not carry prescribed textbook work because of low mentality. This is often spoken of, though incorrectly, as delayed mentality. In one family of eight children only the first child was normal. The mental and physical injuries were increasingly severe. The eighth child had both hare-lip and a double cleft palate. The seventh child had cleft palate and the sixth was a near idiot. The second to fifth, inclusive, presented increasing degrees of disturbed mentality.
In my cabin-to-cabin studies of families living in the hill country of North Carolina, I found many cases of physical and mental injury. Among these cases arthritis and heart disease were very frequent, many individuals being bed ridden. A typical case is shown in the upper part of figure 148 [sorry, I can’t show you the picture, but it is not too important,] of a father and mother and their one child. The child is so badly injured that he is mentally an imbecile. They are living on very poor land where even the vegetable growth is scant and of poor quality. Their food consisted largely of corn bread, corn syrup, some fat pork, and strong coffee.
As the title of the book implies, Dr. Price’s thesis is that bad nutrition leads to physical degeneration. (Which, of course, it does.) He was working back when folks were just discovering vitamins and figuring out that diseases like scurvy, pellagra, and beriberi were really nutritional deficiencies; figuring out the chemical compositions necessary for fertile soil; and before the widespread adoption of artificial fertilizers (possibly before their invention.) Dr. Price thought that American soils, particularly in areas that had been farmed for longer or had warmer, wetter weather, had lost much of their nutritional content:
My studies of this problem of reduced capacity of soils for maintaining animal life have included correspondence with the agricultural departments of all of the states of the union with regard to maintaining cattle. The reduction in capacity ranges from 20 to 90 per cent… I am advised that it would cost $50 an acre to replace the phosphorus alone that has been shipped off the land in large areas.
There is an important fact that we should consider; namely, the role that has been played by glaciers in grinding up and distributing rock formations. One glacier, the movement of which affected the surface soil of Ohio, covered only about half the state; namely, that area west of a line starting east of Cleveland and extending diagonally west across the state to Cincinnati. It is important for us to note that, in the areas extending south and east of this line, several expressions of degeneration are higher than in the areas north and west of this line. The infant mortality per thousand live births in 1939 is informative. In the counties north and west of that line, the death rate was from 40 to 49 per thousand live births; whereas, in the area south and east of that line, the death rate was from 50 to 87.
It is of particular interest to us as dentists that studies show the percentage of teeth with caries to be much higher southeast of this line than northwest of it.
So I Googled around, and found this map of the last glaciation of Ohio:
Okay, I lied, it’s obviously a map of ACT scores. But it actually does look a lot like the glaciation map.
Australia’s soils, from what I understand, are particularly bad–because the continent’s rocks are so geologically old, the soil is extremely low in certain key nutrients, like iodine. Even with iodine supplementation, deficiencies are still occasionally a problem.
Many of the soils in the state are steeply sloping and tend to be shallow, acidic, and deficient in available phosphorus. As early as the late 19th century progressive farmers used rock phosphate, bone meal, and lime to increase crop yield and quality. Since the mid-20th century farmers have used soil tests and corrected mineral deficiencies. Most crop land and much of the pasture land are no longer severely deficient in essential nutrients. West Virginia has always been primarily a livestock producing state. Land on steep slopes is best suited to producing pasture and hay.
Nutritional deficiencies due to poor soil could have been a problem a century ago, just as Pellagra and hookworms were a problem, but they seem unlikely to be a big deal today, given both modern fertilizers and our habit of buying foods shipped in from California.
Looking at statistics from 2005 (the latest for which mortality rates are available) the researchers found that though coal mining brought in about $8 billion to the state coffers of Appalachian states, the costs of the shorter life-spans associated with coal mining operations were nearly $17 billion to $84.5 billion.
Coal mining areas in Appalachia were found to have nearly 11,000 more deaths each year than other places in the nation, with 2,300 of those attributable to environmental factors such as air and water pollution.
The Nation reports that:
In 2010, an explosion at the Upper Big Branch coal mine in southern West Virginia killed twenty-nine miners. Later that year, an explosion at a West Virginia chemical plant killed two workers and released toxic fumes into the surrounding areas. This past year, West Virginia led the nation in coal-mining deaths. …
One study found that residents of areas surrounding mountaintop-removal coal mines “had significantly higher mortality rates, total poverty rates and child poverty rates every year compared to other…counties.” Another study found that compared to residents of other areas in the state, residents of the state’s coal-mining regions were 70 percent more likely to suffer from kidney disease, over 60 percent more likely to develop obstructive lung diseases such as emphysema and 30 percent likelier to have high blood pressure.
In 2014, the Elk River Chemical spill left 300,000 residents of West Virginia without potable water. Five months later, another spill happened at the same site, the fourth in five years. (The chemicals involved are used in the processing/washing of coal.)
Overloaded coal trucks are a perpetual menace on the narrow, winding roads of the Appalachian coalfields. From 2000 to 2004, there were more than seven hundred accidents involving coal trucks in Kentucky alone; fifty-three people died, and more than five hundred were injured. …
After the coal is washed, a slurry of impurities, coal dust, and chemical agents used in the process remains. This liquid waste, called “coal sludge” or “slurry,” is often injected into abandoned underground mines, a practice that can lead to groundwater contamination. … In public hearings, many coalfield residents have attributed their health problems to water wells polluted after the coal mining industry “disposes” its liquid waste by injecting coal slurry underground. The primary disposal practice for coal slurry is to store it in vast unlined lagoons or surface impoundments created near mountaintop-removal mines. Hundreds of these slurry impoundments are scattered across the Appalachian coalfields. Individual impoundments have been permitted to store billions of gallons of waste. … In 2000 a slurry impoundment operated by the Martin County Coal Company in Kentucky broke through into abandoned mineworks, out old mine portals, and into tributary streams of the Big Sandy River. More than 300 million gallons of coal slurry fouled the waterway for a hundred miles downriver.
So, living near a coal mine is probably bad for your health.
Concentration of Land
Wikipedia claims that land in Appalachia (or maybe it was just WV) is highly concentrated in just a few hands–one of those being the government, as much of Appalachia is national parks and forests and the like. Of course, this could be an effect rather than a cause of poverty.
Appalachia may be “isolated” and “rural,” but it’s quite close to a great many cities and universities. Parts of West Virginia are close enough to DC that people apparently commute between them.
In the early 1900s, so many people left Appalachia for the industrial cities of the Northeast and Midwest that U.S. Route 23 and Interstate 75 became known as the “Hillbilly Highway.” (BTW, I don’t think Appalachians like being called “hillbillies.”)
Compared to Appalachian areas in Arkansas or Oklahoma, West Virginia and Kentucky are particularly close to the industrial regions and coastal universities. As a result, they may have lost a far larger number of their brightest and most determined citizens.
While the Appalachian states don’t have particularly low IQs, their IQ curve seems likely to be narrower than other states’. West Virginia, for example, is only about 3% black, 1% Hispanic, and 0.5% Asian. MA, by contrast, is 9% black, 11% Hispanic, and 6% Asian. Blacks and Hispanics tend to score lower than average on IQ tests, and Asians tend to score higher, potentially giving MA more people scoring both above and below average, while WV may have more people scoring right around the middle of the distribution. With its brightest folks heading to universities outside the region, Appalachia may continue to struggle.
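The "narrower curve" point is just mixture-variance arithmetic: a mixed population's variance equals the average within-group variance plus the spread of the group means around the grand mean. Here is a minimal sketch with illustrative group means and demographic weights (rough textbook-style figures and round numbers I picked for the example, not measurements; the white population shares are assumed remainders):

```python
import math

def mixture_stdev(weights, means, sds):
    """Std dev of a mixture distribution: weighted within-group
    variance plus variance of group means around the grand mean."""
    grand = sum(w * m for w, m in zip(weights, means))
    var = sum(w * (sd**2 + (m - grand)**2)
              for w, m, sd in zip(weights, means, sds))
    return math.sqrt(var)

# Illustrative group means (white 100, black 85, Hispanic 90, Asian 105),
# each with within-group SD 15.
# WV-like mix: ~95.5% white, 3% black, 1% Hispanic, 0.5% Asian
wv = mixture_stdev([0.955, 0.03, 0.01, 0.005], [100, 85, 90, 105], [15] * 4)
# MA-like mix: ~74% white, 9% black, 11% Hispanic, 6% Asian
ma = mixture_stdev([0.74, 0.09, 0.11, 0.06], [100, 85, 90, 105], [15] * 4)
print("WV-like overall stdev:", wv)
print("MA-like overall stdev:", ma)
```

Even with identical within-group spreads, the more mixed population ends up with a wider overall distribution, which is the sense in which a homogeneous state's curve would be narrower.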
And finally, yes, maybe there is just something about the kinds of people who live in Appalachia that predispose them to certain ailments, like smoking or over-eating. Perhaps the kinds of people who end up working in coal mines are also the kinds of people who are predisposed to get cancer or use drugs. I don’t know enough people from the area to know either way.
To the people of Appalachia: I wish you health and happiness.