I do not believe that IQ tests measure intelligence. Rather I believe that they measure a combination of intelligence, learning and concentration at a particular point in time. …
You may wish to read the whole thing there.
The short response is that I basically agree with the bit quoted, and I suspect that virtually everyone who takes IQ tests seriously does as well. We all know that if you come into an IQ test hungover, sick, and desperately needing to pee, you’ll do worse than if you’re well-rested, well-fed, and feeling fine.
That time I fell asleep during finals?
Not so good.
Folks who study IQ for a living, like the famous Flynn, believe that environmental effects like the elimination of leaded gasoline and general improvements in nutrition have raised average IQ scores over the past century or two. (Which I agree seems pretty likely.)
The ability to sit still and concentrate is especially variable in small children–little boys are especially notorious for preferring to run and play instead of sit at a desk and solve problems. And while real IQ tests (as opposed to the SAT) have been designed not to hinge on whether or not a student has learned a particular word or fact, the effects of environmental “enrichment” such as better schools or high-IQ adoptive parents do show up in children’s test scores–but fade away as children grow up.
There’s a very sensible reason for this. I am reminded here of an experiment I read about some years ago: infants (probably about one year old) were divided into two groups, and one group was taught how to climb the stairs. Six months later, the special-instruction group was still better at stair-climbing than the no-instruction group. But two years later, both groups of children were equally skilled at stair-climbing.
There is only so good anyone will ever get at stair-climbing, after all, and after two years of practice, everyone is about equally talented.
The sensible conclusion is that we should never evaluate an entire person based on just one IQ test result (especially in childhood.)
The mistake some people (not Chauncey Tinker) make is to jump from “IQ tests are not 100% reliable” to “IQ tests are meaningless.” Life is complicated, and people like to sort it into neat little packages. Friend or foe, right or wrong. And while a single IQ test is insufficient to judge an entire person, the results of multiple IQ tests are fairly reliable–and if we aggregate our results over multiple people, we get even better results.
As with all data, more tests + more people => random measurement error matters less.
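This is just the statistics of averaging: independent noise on a test score shrinks with the square root of the number of tests averaged. A minimal sketch (the 5-point noise figure is an assumed value for illustration, not a real test-retest statistic):

```python
import math

def standard_error(noise_sd, n_tests):
    """Standard error of the mean of n independent, equally noisy measurements."""
    return noise_sd / math.sqrt(n_tests)

# Suppose a single sitting of an IQ test carries ~5 points of random noise
# (hunger, sleep, mood). Averaging repeated sittings shrinks that noise:
for n in (1, 4, 25):
    print(n, standard_error(5, n))  # 1 -> 5.0, 4 -> 2.5, 25 -> 1.0
```

The same logic applies across people: averaging over many test-takers makes any individual’s bad day wash out of the group estimate.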
I think the “IQ tests are meaningless” crowd is operating under the assumption that IQ scholars are actually dumb enough to blindly judge an entire person based on a single childhood test. (Dealing with this strawman becomes endlessly annoying.)
Like all data, the more the merrier:
So this complicated looking graph shows us the effects of different factors on IQ scores over time, using several different data sets (mostly twins studies.)
At 5 years old, “genetic” factors (the diamond and thick lines) are less important than “shared environment.” Shared environment = parenting and teachers.
That is, at the age of 5, a pair of identical twins who were adopted by two different families will have IQ scores that look more like their adoptive parents’ IQ scores than their genetic relatives’ IQ scores. Like the babies taught to climb stairs before their peers, the kids whose parents have been working hard to teach them their ABCs score better than kids whose parents haven’t.
By the age of 7, however, this parenting effect has become less important than genetics. This means that those adopted kids are now starting to have IQ scores more similar to their biological relatives than to their adoptive relatives. Like the kids from the stair-climbing experiment, their scores are now more based on their genetic abilities (some kids have better balance and coordination, resulting in better stair-climbing) than on whatever their parents are doing with them.
By the age of 12, the effects of parenting drop to around 0. At this point, it’s all up to the kid.
Of course, adoption studies are not perfect–adoptive parents are not randomly selected and have to go through various hoops to prove that they will be decent parents, and so tend not to be the kinds of people who lock their children in closets or refuse to feed them. I am sure this kind of parenting does terrible things to IQ, but there is no ethical way to design a randomized study to test them. Thankfully, the % of children subject to such abysmal parenting is very low. Within the normal range of parenting practices, parenting doesn’t appear to have much (if any) effect on adult IQ.
The point of all this is that what I think Chauncey means by “learning,” that is, advantages some students have over others because they’ve learned a particular fact or method before the others do, does appear to have an effect on childhood IQ scores, but this effect fades with age.
I think Pumpkin Person is fond of saying that life is the ultimate IQ test.
While we can probably all attest to a friend who is “smart but lazy,” or smart but interested in a field that doesn’t pay very well, like art or parenting, the correlation between IQ and life outcomes (e.g., money) is amazingly solid:
The correlation even holds internationally:
Map of IQ by country. Source: Wikipedia.
There’s a simple reason why this correlation holds despite lazy and non-money-oriented smart people: there are also lazy and non-money-oriented dumb people, and lazy smart people tend to make more money and make better long-term financial decisions than lazy dumb people.
Note that none of these graphs are the result of a single test. A single test would, indeed, be useless.
More than 13 million pain-blocking epidural procedures are performed every year in the United States. Although epidurals are generally regarded as safe, there are complications in up to 10 percent of cases, in which the needles are inserted too far or placed in the wrong tissue.
A team of researchers from MIT and Massachusetts General Hospital hopes to improve those numbers with a new sensor that can be embedded into an epidural needle, helping anesthesia doctors guide the needle to the correct location.
Since inserting a giant needle into your spine is really freaky, but going through natural childbirth is hideously painful, I strongly support this kind of research.
More than half of Americans under the age of 25 who have a bachelor’s degree are either unemployed or underemployed. According to The Christian Science Monitor, nearly 1 percent of bartenders and 14 percent of parking lot attendants have a bachelor’s degree.
Adding additional degrees is no guarantee of employment either. According to a recent Urban Institute report, nearly 300,000 Americans with master’s degrees and over 30,000 with doctorates are on public relief. …
Unless you have a “hard” skill, such as a mastery of accounting, or a vocational certificate (e.g., in teaching), your liberal arts education generally will not equip you with the skill set that an employer will need.
Obviously colleges still do some good things. Much of the research I cite here in this blog originated at a college of some sort. And of course, if you are careful and forward thinking, you can use college to obtain useful skills/information.
But between the years, money, and effort students spend, not to mention the absurd political indoctrination, college is probably a net negative for most students.
A few doctors in the 1400s probably saved the lives of their patients, but far more killed them.
Okay, so this is just me thinking (and mathing) out loud. Suppose we have two different groups (A and B) of 100 people each (arbitrary number chosen for ease of dividing.) In Group A, people are lumped into 5 large “clans” of 20 people each. In Group B, people are lumped in 20 small clans of 5 people each.
Each society has an average IQ of 100–ten people with 80IQs, ten people with 120IQs, and eighty people with 100IQs. I assume that there is slight but not absolute assortative mating, so that most high-IQ and low-IQ people end up marrying someone average.
100/100: 30 couples
100/80: 9 couples
100/120: 9 couples
80/80: 1 couple
120/120: 1 couple
Okay, so there should be thirty couples where both partners have 100IQs, nine 100/80IQ couples, nine 100/120IQ couples, one 80/80IQ couple, and one 120/120IQ couple.
If each couple has 2 kids, distributed thusly:
100/100=> 10% 80, 10% 120, and 80% 100
120/120=> 100% 120
80/80 => 100% 80
120/100=> 100% 110
80/100 => 100% 90
Then we’ll end up with eight 80IQ kids, eighteen 90IQ, forty-eight 100IQ, eighteen 110IQ, and eight 120IQ.
So, under pretty much perfect and totally arbitrary conditions that probably only vaguely approximate how genetics actually works (also, we are ignoring the influence of random chance on the grounds that it is random and therefore evens out over the long-term,) our population approaches a normal bell-curved IQ distribution.
Not bad for a very, very rough model that is trying to keep the math very simple so I can write it in a blog post window instead of a paper. (And the totals do check out: eight + eighteen + forty-eight + eighteen + eight = 100 kids, so nobody has gotten lost.)
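The bookkeeping above is easy to fumble by hand, so here is a direct transcription of the model’s stated assumptions into Python–nothing beyond the couple counts and offspring percentages already given:

```python
from collections import Counter

# Couples and counts from the text; each couple has 2 kids.
couples = {(100, 100): 30, (100, 80): 9, (100, 120): 9,
           (80, 80): 1, (120, 120): 1}

# Offspring IQ distribution for each pairing, as given above.
offspring = {
    (100, 100): {80: 0.1, 100: 0.8, 120: 0.1},
    (100, 80):  {90: 1.0},
    (100, 120): {110: 1.0},
    (80, 80):   {80: 1.0},
    (120, 120): {120: 1.0},
}

kids = Counter()
for pair, n_couples in couples.items():
    for iq, frac in offspring[pair].items():
        kids[iq] += 2 * n_couples * frac  # 2 kids per couple

print(dict(sorted(kids.items())))  # {80: 8.0, 90: 18.0, 100: 48.0, 110: 18.0, 120: 8.0}
print(sum(kids.values()))          # 100.0
```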
Anyway, now let’s assume that we don’t have a 2-child policy in place, but that being smart (or dumb) does something to your reproductive chances.
In the simplest model, people with 80IQs have zero children, 90s have one child, 100s have 2 children, 110s have 3 children, and 120s have 4 children.
oh god but the couples are crossed so do I take the average or the top IQ? I guess I’ll take average.
100/100: 30 couples => 60 kids
100/80: 9 couples => 9 kids
100/120: 9 couples => 27 kids
80/80: 1 couple => 0 kids
120/120: 1 couple => 4 kids
(The 100/100 couples’ sixty kids split as before: six 80IQ, forty-eight 100IQ, and six 120IQ.)
So our new distribution is six 80IQ, nine 90IQ, forty-eight 100IQ, twenty-seven 110IQ, and ten 120IQ.
(checks math oh good it adds up to 100.)
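Transcribing the fertility rule into Python to double-check: same couples and offspring percentages as before, with the only new assumption being the 0/1/2/3/4-kids schedule keyed on the couple’s average IQ:

```python
from collections import Counter

# Fertility keyed on the couple's average IQ:
# 80 -> 0 kids, 90 -> 1, 100 -> 2, 110 -> 3, 120 -> 4.
fertility = {80: 0, 90: 1, 100: 2, 110: 3, 120: 4}

couples = {(100, 100): 30, (100, 80): 9, (100, 120): 9,
           (80, 80): 1, (120, 120): 1}

offspring = {
    (100, 100): {80: 0.1, 100: 0.8, 120: 0.1},
    (100, 80):  {90: 1.0},
    (100, 120): {110: 1.0},
    (80, 80):   {80: 1.0},
    (120, 120): {120: 1.0},
}

kids = Counter()
for pair, n_couples in couples.items():
    avg_iq = sum(pair) // 2              # take the couple's average, as above
    n_kids = n_couples * fertility[avg_iq]
    for iq, frac in offspring[pair].items():
        kids[iq] += n_kids * frac

print(dict(sorted(kids.items())))  # {80: 6.0, 90: 9.0, 100: 48.0, 110: 27.0, 120: 10.0}
```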
We’re not going to run gen three, as obviously the trend will continue.
Let’s go back to our original clans. Society A has 5 clans of 20 people each; Society B has 20 clans of 5 people each.
With 10 high-IQ and 10 low-IQ people per society, each clan in A is likely to have 2 smart and 2 dumb people. Each clan in B, by contrast, is likely to have only 1 smart or 1 dumb person. For our model, each clan will be the reproductive unit rather than each couple, and we’ll take the average IQ of each clan.
Society A: 5 clans with average of 100 IQ => social stasis.
Society B: 20 clans, 10 with average of 96, 10 with average of 104. Not a big difference, but if the 104s have even just a few more children over the generations than the 96s, they will gradually increase as a % of the population.
Of course, over the generations, a few of our 5-person clans will get two smart people (average IQ 108), a dumb and a smart (average 100), and two dumb (92.) The 108 clans will do very well for themselves, and the 92 clans will do very badly.
If society functions so that smart people have more offspring than dumb people (definitely not a given in the real world,) then: In society A, everyone benefits from the smart people, whose brains uplift their entire extended families (large clans.) This helps everyone, especially the least capable, who otherwise could not have provided for themselves. However, the average IQ in society A doesn’t move much, because you are likely to have equal numbers of dumb and smart people in each family, balancing each other out.

In Society B, the smart people are still helping their families, but since their families are smaller, random chance dictates that they are less likely to have a dumb person in their families. The families with the misfortune to have a dumb member suffer and have fewer children as a result; the families with the good fortune to have a smart member benefit and have more children as a result. Society B has more suffering, but also evolves to have a higher average IQ. Society A has less suffering, but its IQ does not change.

Obviously this is a thought experiment and should not be taken as proof of anything about real world genetics. But my suspicion is that this is basically the mechanism behind the evolution of high IQ in areas with long histories of nuclear, atomized families, and the mechanism suppressing IQ in areas with strongly tribal norms. (See HBD Chick for everything family structure related.)
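The clan comparison can also be sketched numerically. Assume–purely for illustration, since the text above doesn’t commit to numbers–that each clan’s fertility scales by 1% per IQ point its average sits above or below 100, and that children inherit their clan’s average:

```python
def next_gen_mean(clans):
    """clans: list of (clan_average_iq, clan_size) pairs.
    Hypothetical fertility rule: +/-1% children per IQ point above/below 100."""
    weights = [(1 + 0.01 * (avg - 100)) * size for avg, size in clans]
    return sum(w * avg for w, (avg, _) in zip(weights, clans)) / sum(weights)

# Society A: 5 clans of 20, each holding 2 smart and 2 dumb members -> average 100.
society_a = [(100, 20)] * 5

# Society B: 20 clans of 5; the ten clans holding an 80-IQ member average 96,
# the ten holding a 120-IQ member average 104.
society_b = [(96, 5)] * 10 + [(104, 5)] * 10

print(next_gen_mean(society_a))            # 100.0 -- stasis
print(round(next_gen_mean(society_b), 2))  # 100.16 -- slow upward drift
```

The per-generation shift in B is tiny, but it compounds; in A the smart and dumb members cancel inside every clan, so there is nothing for selection between clans to act on.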
Finishing up with our discussion (in response to a reader’s question):
Why are people snobs about intelligence?
Is math ability better than verbal?
Do people only care about intelligence in the context of making money?
Now, this is the point in the conversation where somebody tends to say something like, “My cousin / little sister /uncle is retarded, but they are still a beautiful, wonderful person and I love them as much as everyone else, and therefore it is mean to say that smart people are higher status than dumb people.”
It is good that you love your family. You should love your family. I am sure your relatives are lovely people, and you enjoy their company, and would be worse off without them.
But by the same token, I am grateful for the fact that I have never had polio, smallpox, or Ebola. I am thankful that I did not die in childbirth (my own or my childrens’.) I am thankful for life-saving surgeries, medications, and mass-vaccination campaigns that have massively reduced the quantity of human suffering, and I happily praise the doctors and scientists who made all of this possible.
That is why doctors and scientists are higher status than dumb people, and why math-smart people (who tend to end up in science) believe that they should have more status than verbal-smart people.
But on to #3–what is this “intelligence” and “money” connection? (And why does our questioner think it is so bad?)
The obvious answer is no, people don’t only care about intelligence in the context of making money. People also care about enjoying music and reading good books and having fun with their friends, having pleasant conversations and not dying of cancer.
But people are practical creatures, and their first priority is making sure that they and their children will eat tomorrow.
In a purely meritocratic society, more intelligent people will tend to end up in professions that require more intellect and more years of training, which will in turn allow them to demand higher wages. (So will people with rare physical talents, like athleticism and musical ability.) Unintelligent people, by contrast, will end up in the jobs that require the least thought and least training, where they will soon be replaced by robots.
The incentive to pay your doctor more than your trash collector is obvious.
The truly bright and creative, of course, will go beyond merely being employed and actually start companies, invent products/processes, and generally reshape the world around them, all of which results in making even more money.
The truly dull, by contrast, even when they can get jobs, tend to be impulsive and bad at planning, which results in the loss of what little money they have.
We do not live in a purely meritocratic society. No one does. We make efforts to that end, though, which is why public schools exist and employers are officially not supposed to consider things like race and gender when hiring people. Which means that our society is pretty close to meritocratic.
And in fact, the correlation between IQ and wealth/income is remarkably robust:
It even holds internationally:
There are a few outliers–the Gulf oil states are far richer than their IQs would predict, due to oil; China is poorer than its IQ predicts, which may be due to the lingering effects of communism or to some quirk in the nature of Chinese intelligence (either way, I expect a very China-dominant future)–but otherwise, IQ predicts average per capita GDP quite well.
Here people tend to bring up a few common objections:
1. I know a guy who is smart but poor, and a guy who is dumb but rich! Two anecdotes are totally sufficient to completely disprove a general trend calculated from millions of data points.
Yes, obviously some really smart people have no desire to go into high-paying fields, and devote their lives to art, music, volunteering with the poor, raising children, or just chilling with their friends. Some smart people have health problems, are unfairly discriminated against, live in areas with few jobs, or are otherwise unable to reach their potentials. Some dumb people luck into wealth or a high-paying job.
It would be a strange world indeed if IQ were absolute destiny.
But the existence of outliers does not negate the overall trends–smarter people tend to get jobs in higher-paying fields and manage their money more effectively; dumb people tend to get jobs in lower-paying fields and manage their money ineffectively.
2. Maybe everyone is equally smart, but just expresses it in different ways. (Corollary form: IQ is just a measure of how good you are at taking IQ tests.)
Either we mean something when we say “intelligence,” or we do not. If we want to define “intelligence” so that everyone is equally smart, then yes, everyone is equally smart. If we want to know if some people are better than others at doing math, then we find that some people are better than others at doing math. Are some people better than others at reading? Yes. Are some people better than others at football? Yes.
If you transported me tomorrow to a hunter-gatherer community, and they gave me a test of the skills necessary for survival there, I’d flunk (and die.) They’d conclude that I was an idiot who couldn’t gather her way out of a paper bag.
Very well, then.
But neither of us lives in a hunter-gatherer society, nor do we particularly care about the skills necessary to survive in one. If I want to know the kinds of intelligence that are necessary for success in industrial societies–the kind of success that may have led to the existence of industrial societies–then I’m looking at normal old “intelligence” as people conventionally use the term, measured by IQ scores, the SAT, vague impressions, or report cards.
3. “You’ve got causality backwards–people with money send their kids to expensive prep schools, which results in them learning more, which results in higher IQ scores. These “smart” kids then use family connections/prestige to land good jobs, resulting in higher wealth.”
As this shows, the heritability of IQ and of behavioral traits is consistently high, reaching into the 0.8-0.9+ range. This means, out of a group of people, at least 80-90% of the overall differences between them (known as the “variance” in statistical parlance) can be attributed to genetic differences between them. This chart shows that this becomes most evident in adulthood, when genes have been given a chance to fully express themselves. I have summed this up in a neat set of rules:
Heredity: 70-80%
Shared environment: 0%
Something else [random chance]: 20-30%
In other words, adopted kids end up with the IQ scores you’d predict from looking at their biological parents, not their legal parents. Barring extremes of poverty or abuse, the way your parents raise you–including the quality of the schools you attend–has very little long-term effect on IQ.
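For readers who want the mechanics: the standard first-pass decomposition behind twin charts like the one above is Falconer’s formula, which splits trait variance using the MZ (identical) and DZ (fraternal) twin correlations. The correlations below are illustrative placeholders roughly in the range reported for adult IQ, not figures from any particular study:

```python
def falconer(r_mz, r_dz):
    """Falconer's decomposition of trait variance from twin correlations.
    Returns (h2, c2, e2): heritability, shared environment, everything else."""
    h2 = 2 * (r_mz - r_dz)  # genes: MZ twins share twice the genes DZ twins do
    c2 = r_mz - h2          # shared (family) environment
    e2 = 1 - r_mz           # nonshared environment plus measurement error
    return h2, c2, e2

# Hypothetical adult-range correlations: identical twins 0.85, fraternal 0.45.
h2, c2, e2 = falconer(0.85, 0.45)
print([round(x, 2) for x in (h2, c2, e2)])  # [0.8, 0.05, 0.15]
```

With those assumed inputs, heredity comes out around 80%, shared environment near zero, and the remainder is noise plus nonshared environment–the same shape as the adult end of the chart.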
On a related note, massively increased school expenditures since the ’80s have done very little to improve test scores:
IQ doesn’t lend itself to much environmental manipulation – indeed, interventions that attempt to boost IQ have all met with failure. As well, IQ remains predictive even when measured in youth. It is predictive even when one controls for things like socioeconomic status (say during childhood). Indeed, the best control for this, looking at different siblings within a family, finds that IQ is predictive of real world outcomes between siblings – the sibling with the higher IQ tends to do better.
Everybody wants to know why some groups or countries outperform other groups or countries, but no one likes to be told that they–or a group that they belong to–are less intelligent than others. No one wants to be in the red; everyone wants to blame their troubles on someone else.
Thus a great deal of debate; some people want to prove that the wealth and poverty of nations depends on IQ, and some people want to prove that it does not. No matter your personal opinions on the matter, it’s pretty hard to have a discussion about IQ without the debate resurfacing.
Now, I fully believe that rich people enroll their kids in expensive test-prep classes, which result in small increases in SAT scores over students who’ve never seen the test before (an effect that wears off once classes are over.) It may also be that people from countries where schools barely exist look at a test and have no idea what you want them to do with it, regardless of intelligence. But if parental income were the entire story, rich whites, blacks, Hispanics, and Asians ought to all get similar SAT scores, (with the exception of verbal scores for ESL students,) and poor whites, blacks, Hispanics, and Asians ought to all get similar, lower scores. Instead, the children of wealthy Black parents have worse SAT scores than the children of poor whites and Asians. (Except Asian verbal scores, which are pretty bad at the low end–probably an ESL artifact.)
Regardless, a certain kind of intelligence appears to be useful for building certain kinds of societies.
Yes, there are lots of reasons to value intelligence, like making art and enjoying a good book. And there are many lifestyles that people enjoy that do not require making lots of money, nor do they have much to do with capitalism. But there exists, nonetheless, a fairly reliable correlation–at the group level–between average IQ and income/wealth/development level. Most people care about this not because they want to exploit each other and destroy the environment, but because they want to be well-fed, healthy, and happy.
Continuing with yesterday’s discussion (in response to a reader’s question):
Why are people snobs about intelligence?
Is math ability better than verbal?
Do people only care about intelligence in the context of making money?
1. People are snobs. Not all of them, obviously–just a lot of them.
So we’re going to have to back this up a step and ask why people are snobs, period.
Paying attention to social status–both one’s own and others’–is probably instinctual. We process social status in our prefrontal cortexes–the part of our brain generally involved in complex thought, imagination, long-term planning, personality, not being a psychopath, etc. Our brains respond positively to images of high-status items–activating reward-feedback loops that make us feel good–and negatively to images of low-status items–activating feedback loops that make us feel bad.
…researchers asked a person if the following statement was an accurate description of themselves: “I wouldn’t hesitate to go out of my way to help someone in trouble.” Some of the participants answered the question without anyone else seeing their response. Others knowingly revealed their answer to two strangers who were watching in a room next to them via video feed. The result? When the test subjects revealed an affirmative answer to an audience, their [medial prefrontal cortexes] lit up more strongly than when they kept their answers to themselves. Furthermore, when the participants revealed their positive answers not to strangers, but to those they personally held in high regard, their MPFCs and reward striatums activated even more strongly. This confirms something you’ve assuredly noticed in your own life: while we generally care about the opinions of others, we particularly care about the opinions of people who really matter to us.
(Note what constitutes a high-status activity.)
But this alone does not prove that paying attention to social status is instinctual. After all, I can also point to the part of your brain that processes written words (the Visual Word Form Area,) and yet I don’t assert that literacy is an instinct. For that matter, anything we think about has to be processed in our brains somewhere, whether instinct or not.
Better evidence comes from anthropology and zoology. According to Wikipedia, “All societies have a form of social status,” even hunter-gatherers. If something shows up in every single human society, that’s a pretty good sign that it is probably instinctual–and if it isn’t, it is so useful a thing that no society exists without it.
Among animals, social status is generally determined by a combination of physical dominance, age, relationship, and intelligence. Killer whale pods, for example, are led by the eldest female in the family; leadership in elephant herds is passed down from a deceased matriarch to her eldest daughter, even if the matriarch has surviving sisters. Male lions assert dominance by being larger and stronger than other lions.
In all of these cases, the social structure exists because it benefits the group, even if it harms some of the individuals in it. If having no social structure were beneficial for wolves, then wolf packs without alpha wolves would out-compete packs with alphas. This is the essence of natural selection.
Among humans, social status comes in two main forms, which I will call “earned” and “background.”
“Earned” social status stems from things you do, like rescuing people from burning buildings, inventing quantum physics, or stealing wallets. High status activities are generally things that benefit others, and low-status activities are generally those that harm others. This is why teachers are praised and thieves are put in prison.
Earned social status is a good thing, because it reward people for being helpful.
“Background” social status is basically stuff you were born into or have no effect over, like your race, gender, the part of the country you grew up in, your accent, name, family reputation, health/disability, etc.
Americans generally believe that you should not judge people based on background social status, but they do it, anyway.
Interestingly, high-status people are not generally violent. (Just compare crime rates by neighborhood SES.) Outside of military conquest, violence is the domain of the low-class and those afraid they are slipping in social class, not the high class. Compare Angela Merkel to the average German far-right protester. Obviously the protester would win in a fist-fight, but Merkel is still in charge. High class people go out of their way to donate to charity, do volunteer work, and talk about how much they love refugees. In the traditional societies of the Pacific Northwest, they held potlatches at which they distributed accumulated wealth to their neighbors; in our society, the wealthy donate millions to education. Ideally, in a well-functioning system, status is the thanks rich people get for doing things that benefit the community instead of spending their billions on gold-plated toilets.
The Arabian babbler … spends most of its life in small groups of three to 20 members. These groups lay their eggs in a communal nest and defend a small territory of trees and shrubs that provide much-needed safety from predators.
When it’s living as part of a group, a babbler does fairly well for itself. But babblers who get kicked out of a group have much bleaker prospects. These “non-territorials” are typically badgered away from other territories and forced out into the open, where they often fall prey to hawks, falcons, and other raptors. So it really pays to be part of a group. … Within a group, babblers assort themselves into a linear and fairly rigid dominance hierarchy, i.e., a pecking order. When push comes to shove, adult males always dominate adult females — but mostly males compete with males and females with females. Very occasionally, an intense “all-out” fight will erupt between two babblers of adjacent rank, typically the two highest-ranked males or the two highest-ranked females. …
Most of the time, however, babblers get along pretty well with each other. In fact, they spend a lot of effort actively helping one another and taking risks for the benefit of the group. They’ll often donate food to other group members, for example, or to the communal nestlings. They’ll also attack foreign babblers and predators who have intruded on the group’s territory, assuming personal risk in an effort to keep others safe. One particularly helpful activity is “guard duty,” in which one babbler stands sentinel at the top of a tree, watching for predators while the rest of the group scrounges for food. The babbler on guard duty not only foregoes food, but also assumes a greater risk of being preyed upon, e.g., by a hawk or falcon. …
Unlike chickens, who compete to secure more food and better roosting sites for themselves, babblers compete to give food away and to take the worst roosting sites. Each tries to be more helpful than the next. And because it’s a competition, higher-ranked (more dominant) babblers typically win, i.e., by using their dominance to interfere with the helpful activities of lower-ranked babblers. This competition is fiercest between babblers of adjacent rank. So the alpha male, for example, is especially eager to be more helpful than the beta male, but doesn’t compete nearly as much with the gamma male. Similar dynamics occur within the female ranks.
In the eighteenth and early nineteenth centuries, wealthy private individuals substantially supported the military, with particular wealthy men buying supplies for a particular regiment or a particular fort.
Noblemen paid high prices for military commands, and these posts were no sinecure. You got the obligation to substantially supply the logistics for your men, the duty to obey stupid orders that would very likely lead to your death, the duty to lead your men from in front while wearing a costume designed to make you particularly conspicuous, and the duty to engage in honorable personal combat, man to man, with your opposite number who was also leading his troops from in front.
A vestige of this tradition remains in that every English prince has been sent to war and has placed himself very much in harm’s way.
It seems obvious to me that a soldier being led by a member of the ruling class who is soaking up the bullets from in front is a lot more likely to be loyal and brave than a soldier sent into battle by distant rulers safely in Washington who despise him as a sexist homophobic racist murderer. A soldier who sees his commander, a member of the ruling classes, fighting right in front of him is reflexively likely to fight.
(Note, however, that magnanimity is not the same as niceness. The only people who are nice to everyone are store clerks and waitresses, and they’re only nice because they have to be or they’ll get fired.)
Most people are generally aware of each others’ social statuses, using contextual clues like clothing and accents to make quick, rough estimates. These contextual clues are generally completely neutral–they just happen to correlate with other behaviors.
For example, there is nothing objectively good or bad for society about wearing your pants belted beneath your buttocks, aside from it being an awkward way to wear your pants. But the style correlates with other behaviors, like crime, drug use, and aggression, low paternal investment, and unemployment, all of which are detrimental to society, and so the mere sight of underwear spilling out of a man’s pants automatically assigns him low status. There is nothing causal in this relationship–being a criminal does not make you bad at buckling your pants, nor does wearing your pants around your knees somehow inspire you to do drugs. But these things correlate, and humans are very good at learning patterns.
Likewise, there is nothing objectively better about operas than Disney movies, no real difference between a cup of coffee brewed in the microwave and one from Starbucks; a Harley Davidson and a Vespa are both motorized two-wheelers; and you can carry stuff around in just about any bag or backpack, but only the hoity-toity can afford something as objectively hideous as a $26,000 Louis Vuitton backpack.
All of these things are fairly arbitrary and culturally dependent–the way you belt your pants can't convey social status in a society where people don't wear pants; your taste in movies couldn't matter before movies were invented. Among hunter-gatherers, social status is based on things like one's skill at hunting, and if I showed up to the next PTA meeting wearing a top hat and monocle, I wouldn't get any status points at all.
We tend to aggregate the different social status markers into three broad classes (middle, upper, and lower.) As Scott Alexander says in his post about Siderea’s essay on class in America, which divides the US into 10% Underclass, 65% Working Class, 23.5% Gentry Class, and 1.5% Elite:
Siderea notes that Church’s analysis independently reached about the same conclusion as Paul Fussell’s famous guide. I’m not entirely sure how you’d judge this (everybody’s going to include lower, middle, and upper classes), but eyeballing Fussell it does look a lot like Church, so let’s grant this.
It also doesn’t sound too different from Marx. Elites sound like capitalists, Gentry like bourgeoisie, Labor like the proletariat, and the Underclass like the lumpenproletariat. Or maybe I’m making up patterns where they don’t exist; why should the class system of 21st century America be the same as that of 19th century industrial Europe?
There’s one more discussion of class I remember being influenced by, and that’s Unqualified Reservations’ Castes of the United States. Another one that you should read but that I’ll summarize in case you don’t:
1. Dalits are the underclass, … 2. Vaisyas are standard middle-class people … 3. Brahmins are very educated people … 4. Optimates are very rich WASPs … now they’re either extinct or endangered, having been pretty much absorbed into the Brahmins. …
Michael Church’s system (henceforth MC) and the Unqualified Reservation system (henceforth UR) are similar in some ways. MC’s Underclass matches Dalits, MC’s Labor matches Vaisyas, MC’s Gentry matches Brahmins, and MC’s Elite matches Optimates. This is a promising start. It’s a fourth independent pair of eyes that’s found the same thing as all the others. (commenters bring up Joel Kotkin and Archdruid Report as similar convergent perspectives).
I suspect the tendency to try to describe society as consisting of three broad classes (with the admission that other, perhaps tiny classes that don't exactly fit into the others might exist) is actually just an artifact of being a three-biased society that likes to group things in threes (the Trinity, three-beat joke structure, three bears, Three Musketeers, three notes in a chord, etc.) This three-bias isn't a human universal (or so I have read) but has probably been handed down to us from the Indo-Europeans, ("Many Indo-European societies know a threefold division of priests, a warrior class, and a class of peasants or husbandmen. Georges Dumézil has suggested such a division for Proto-Indo-European society,") so we're so used to it that we don't even notice ourselves doing it.
(For more information on our culture's three-bias and different number biases in other cultures, see Alan Dundes's Interpreting Folklore, though I should note that I read it back in high school and so my memory of it is fuzzy.)
(Also, everyone is probably at least subconsciously cribbing Marx, who was probably cribbing from some earlier guy who cribbed from another earlier guy, who set out with the intention of demonstrating that society–divided into nobles, serfs, and villagers–reflected the Trinity, just like those Medieval maps that show the world divided into three parts or the conception of Heaven, Hell, and Purgatory.)
At any rate, I am skeptical of any system that lumps 65% of people into one social class while splitting off 0.5% into another–such a scheme seems too coarsely grained at one end of the scale and too finely grained at the other. Determining the exact number of social classes in American society may ultimately be futile–perhaps there really are three (or four) highly distinct groups, or perhaps social classes transition smoothly from one to the next with no sharp divisions.
I lean toward the latter theory, with broad social classes as merely a convenient shorthand for extremely broad generalizations about society. If you look any closer, you tend to find that people do draw finer-grained distinctions between themselves and others than “65% Working Class” would imply. For example, a friend who works in agriculture in Greater Appalachia once referred dismissively to other people they had to deal with as “red necks.” I might not be able to tell what differentiates them, but clearly my friend could. Similarly, I am informed that there are different sorts of homelessness, from true street living to surviving in shelters, and that lifetime homeless people are a different breed altogether. I might call them all “homeless,” but to the homeless, these distinctions are important.
Is social class evil?
This question was suggested by a different friend.
I suspect that social class is basically, for the most part, neutral-to-useful. I base this on the fact that most people do not work very hard to erase markers of class distinction, but instead actively embrace particular class markers. (Besides, you can’t get rid of it, anyway.)
It is not all that hard to learn the norms and values of a different social class and strategically employ them. Black people frequently switch between speaking African American Vernacular English at home and standard English at work; I can discuss religion with Christian conservatives and malevolent AI risk with nerds; you can purchase a Harley Davidson t-shirt as easily as a French beret and scarf.
(I am reminded here of an experiment in which researchers were looking to document cab drivers refusing to pick up black passengers; they found that when the black passengers were dressed nicely, drivers would pick them up, but when they wore “ghetto” clothes, the cabs wouldn’t. Cabbies: responding more to perceived class than race.)
And yet, people don't–for the most part–mass-adopt the social markers of the upper class just to fool them. They love their motorcycle t-shirts, their pumpkin lattes, even their regional accents. Class markers are an important part of people's cultural / tribal identities.
But what about class conflicts?
Because every class has its own norms and values, every class is, to some degree, disagreeing with the other classes. People for whom frugality and thrift are virtues will naturally think that people who drink overpriced coffee are lacking in moral character. People for whom anti-racism is the highest virtue will naturally think that Trump voters are despicable racists. A Southern Baptist sees atheists as morally depraved fetus murderers; nerds and jocks are famously opposed to each other; and people who believe that you should graduate from college, become established in your career, get married, and then have 0-1.5 children disapprove of people who drop out of high school, have a bunch of children with a bunch of different people, and go on welfare.
A moderate sense of pride in one’s own culture is probably good and healthy, but spending too much energy hating other groups is probably negative–you may end up needlessly hurting people whose cooperation you would have benefited from, reducing everyone’s well-being.
(A good chunk of our political system’s dysfunctions are probably due to some social classes believing that other social classes despise them and are voting against their interests, and so counter-voting to screw over the first social class. I know at least one person who switched allegiance from Hillary to Trump almost entirely to stick it to liberals they think look down on them for classist reasons.)
Ultimately, though, social class is with us whether we like it or not. Even if a full generation of orphan children were raised with no knowledge of their origins and completely equal treatment by society at large, each would end up marrying/associating with people who have personalities similar to their own (and remember that genetics plays a large role in personality.) Just as current social classes in America are ethnically different (Southern whites are drawn from different European populations than Northern whites, for example), so would the society resulting from our orphanage experiment differentiate into groups of similar genetics and personality.
Why do Americans generally proclaim their opposition to judging others based on background status, and then act classist, anyway? There are two main reasons.
As already discussed, different classes have real disagreements with each other. Even if I think I shouldn’t judge others, I can’t put aside my moral disgust at certain behaviors just because they happen to correlate with different classes.
It sounds good to say nice, magnanimous things that make you sound more socially sensitive and aware than others, like, “I wouldn’t hesitate to go out of my way to help someone in trouble.” So people like to say these things whether they really mean them or not.
In reality, people are far less magnanimous than they like to claim they are in front of their friends. People like to say that we should help the homeless and save the whales and feed all of the starving children in Africa, but few people actually go out of their way to do such things.
There is a reason Mother Teresa is considered a saint, not an archetype.
In real life, not only does magnanimity have a cost (which the rich can better afford), but if you don't live up to your claims, people will notice. If you talk a good talk about loving others but actually mistreat them, people will decide that you're a hypocrite. On the internet, you can post memes for free without having to back them up with real action, causing discussions to descend into competitive virtue-signaling in which no one wants to be the first person to admit that they actually are occasionally self-interested. (Cory Doctorow has a relevant discussion about how "reputation economies"–especially internet-based ones–can go horribly wrong.)
Unfortunately, people often confuse background and achieved status.
American society officially has no hereditary social classes–no nobility, no professions limited legally to certain ethnicities, no serfs, no Dalits, no castes, etc. Officially, if you can do the job, you are supposed to get it.
Most of us believe, at least abstractly, that you shouldn't judge or discriminate against others for background status factors they have no control over, like where they were born, the accent they speak with, or their skin tone. If I have two resumes, one from someone named Lakeesha, and the other from someone named Ian William Esquire III, I am supposed to consider each on its merits, rather than the connotations the names invoke.
But because "status" is complicated, people often go beyond advocating against "background" status and advocate that we shouldn't accord social status for any reason at all. That is, full social equality.
This is not possible and would be deeply immoral in practice.
When you need heart surgery, you really hope that the guy cutting you open is a top-notch heart surgeon. When you’re flying in an airplane, you hope that both the pilot and the guys who built the plane are highly skilled. Chefs must be good at cooking and authors good at writing.
These are all forms of earned status, and they are good.
Smart people are valuable to society because they do nice things like save you from heart attacks or invent cell-phones. This is not “winning at capitalism;” this is benefiting everyone around them. In this context, I’m happy to let smart people have high status.
In a hunter-gatherer society, smart people are the ones who know the most about where animals live and how to track them, how to get water during a drought, and where that one-inch stem they spotted last season–the marker of a tasty underground tuber–is located. Among nomads, smart people are the ones with the biggest mental maps of the territory, the folks who know the safest and quickest routes from good summer pasture to good winter pasture, how to save an animal from dying, and how to heal a sick person. Among pre-literate people, smart people composed epic poems that entertained their neighbors for many winters' nights, and among literate ones, the smart people became scribes and accountants. Even the communists valued smart people, when they weren't chopping their heads off for being bourgeois scum.
So even if we say, abstractly, “I value all people, no matter how smart they are,” the smart people do more of the stuff that benefits society than the dumb people, which means they end up with higher social status.
So, yes, high IQ is a high social status marker, and low IQ is a low social status marker, and thus at least some people will be snobs about signaling their IQ and their disdain for dumb people.
I am speaking here very abstractly. There are plenty of “high status” people who are not benefiting society at all. Plenty of people who use their status to destroy society while simultaneously enriching themselves. And yes, someone can come into a community, strip out all of its resources and leave behind pollution and unemployment, and happily call it “capitalism” and enjoy high status as a result.
I would be very happy if we could stop engaging in competitive holiness spirals and stop lionizing people who became wealthy by destroying communities. I don’t want capitalism at the expense of having a pleasant place to live in.
ὃν οἱ θεοὶ φιλοῦσιν, ἀποθνῄσκει νέος — he whom the gods love dies young. (Menander)
Harpending wasn’t particularly young, nor was his death unexpected, but I am still sad; I have enjoyed his work for years, and there will be no more. Steve Sailer has a nice eulogy.
In less tragic HBD-osphere news, it looks like Peter Frost has stopped writing his blog, Evo and Proud, due to Canadian laws prohibiting free speech. (There has been much discussion of this in the comments on Frost's posts carried over on Unz; ultimately, the antisemitism of many Unz commentators made it too dangerous for Frost to continue blogging, even though his posts actually had nothing to do with Judaism.)
Back to our subject: This is an attempt to answer–coherently–a friend’s inquiry.
Why are people snobs about intelligence?
Is math ability better than verbal?
Do people only care about intelligence in the context of making money?
We’re going to tackle the easiest question first, #2. No, math ability is not actually better than verbal ability.
Imagine two people. Person A–we'll call her Alice–has exceptional verbal ability. She probably has a job as a journalist, novelist, poet, or screenwriter. She understands other people's emotions and excels at interacting verbally with others. But she sucks at math. Not just sucks–she struggles to count to ten.
Alice is going to have a rough time handling money. In fact, Alice will probably be completely dependent on the people around her to handle money for her. Otherwise, however, Alice will probably have a pretty pleasant life.
Of course, if Alice happened to live in a hunter-gatherer society where people don’t use numbers over 5, she would not stand out at all. Alice could be a highly respected oral poet or storyteller–perhaps her society’s version of an encyclopedia, considered wise and knowledgeable about a whole range of things.
Now consider Person B–we’ll call her Betty. Betty has exceptional math ability, but can only say a handful of words and cannot intuit other people’s emotions.
Betty is screwed.
Here’s the twist: #2 is a trick question.
Verbal and mathematical ability are strongly correlated in pretty much everyone who hasn’t had brain damage (so long as you are looking at people from the same society). Yes, people like to talk about “multiple intelligences,” like “kinesthetic” and “musical” intelligence. It turns out that most of these are correlated. (The one exception may be kinesthetic, about which I have heard conflicting reports. I swear I read a study somewhere which found that sports players are smarter than sports watchers, but all I’m finding now are reports that athletes are pretty dumb.)
Yes, many–perhaps most–people are better at one skill than another. This effect is generally small–we're talking about people who get A+s in English and only B+s in math, not people who get A+s in English but Fs in math.
The effect may be more pronounced for people at the extremes of high-IQ–that is, someone who is three standard deviations above the norm in math may be only slightly above average in verbal, and vice versa–but professional authors are not generally innumerate, nor are mathematicians and scientists unable to read and write. (In fact, their professions require constantly writing papers for publication and reading the papers published by their colleagues.)
All forms of “intelligence” probably rely, at a basic level, on bodily well-being. Your brain is a physical object inside your body, and if you do not have the material bits necessary for well-being, your brain will suffer. When you haven’t slept in a long time, your ability to think goes down the tubes. If you haven’t eaten in several days (or perhaps just this morning), you will find it difficult to think. If you are sick or in pain, again, you will have trouble thinking.
Healthy people have an easier time thinking, and this applies across the board to all forms of thought–mathematical, verbal, emotional, kinesthetic, musical, etc.
“Health” here doesn’t just include things we normally associate with it, like eating enough vegetables and swearing to the dentist that this time, you’re really going to floss. It probably also includes minute genetic variations in how efficient your body is at building and repairing tissues; chemicals or viruses you were exposed to in-utero; epigenetics, etc.
So where does this notion that math and science are better than English and feelings come from, anyway?
A. Math (and science) are disciplines with (fairly) objective answers. If I ask you, “What’s 2+2?” we can determine pretty easily whether you got it correct. This makes mathematical ability difficult to fudge and easy to verify.
Verbal disciplines, by contrast, are notoriously fuzzy:
riverrun, past Eve and Adam’s, from swerve of shore to bend
I scowl with frustration at myself in the mirror. Damn my hair – it just won’t behave, and damn Katherine Kavanagh for being ill and subjecting me to this ordeal. I should be studying for my final exams, which are next week, yet here I am trying to brush my hair into submission. I must not sleep with it wet. I must not sleep with it wet. Reciting this mantra several times, I attempt, once more, to bring it under control with the brush. I roll my eyes in exasperation and gaze at the pale, brown-haired girl with blue eyes too big for her face staring back at me, and give up. My only option is to restrain my wayward hair in a ponytail and hope that I look semi presentable.
Best-seller, or Mary Sue dreck?
And what does this mean:
Within that conflictual economy of colonial discourse which Edward Said describes as the tension between the synchronic panoptical vision of domination – the demand for identity, stasis – and the counterpressure of the diachrony of history – change, difference – mimicry represents an ironic compromise. If I may adapt Samuel Weber’s formulation of the marginalizing vision of castration, then colonial mimicry is the desire for a reformed, recognizable Other, as a subject of a difference that is almost the same, but not quite. Which is to say, that the discourse of mimicry is constructed around an ambivalence; in order to be effective, mimicry must continually produce its slippage, its excess, its difference. (source)
If we’re going to argue about who’s smartest, it’s much easier if we can assign a number to everyone and declare that the person with the biggest number wins. The SAT makes a valiant effort at quantifying verbal knowledge like the number of words you can accurately use, but it is very hard to articulate what makes a text so great that Harvard University would hire the guy who wrote it.
B. The products of science have immediately obvious, useful applications, while the products of verbal abilities appear more superficial and superfluous.
Where would we be today without the polio vaccine, internal combustion engines, or the transistor? What language would we be writing in if no one had cracked the Enigma code, or if the Nazis had not made Albert Einstein a persona non grata? How many of us use computers, TVs, or microwaves every day? And let's not forget all of the science that has gone into breeding and raising massively more caloric strains of wheat, corn, chicken, beef, etc., to assuage the world's hunger.
We now live in a country where too much food is our greatest health problem!
If I had to pick between the polio vaccine and War and Peace, I’d pick the vaccine, even if every minute spent with Tolstoy is a minute of happiness. (Except when *spoilers spoilers* and then I cry.)
But literature is not the only product of verbal ability; we wouldn’t be able to tell other people about our scientific discoveries if it weren’t for language.
Highly verbal people are good at communication and so help keep the gears of modern society turning, which is probably why La Griffe du Lion found that national per capita GDP correlated more closely with verbal IQ scores than composite or mathematical scores.
Of course, as noted, these scores are highly correlated–so the whole business is really kind of moot.
So where does this notion come from?
In reality, high-verbal people tend to be more respected and better paid than high-math people. No, not novelists–novelists get paid crap. But average pay for lawyers–high verbal–is much better than average pay for mathematicians. Scientists are poorly paid compared to other folks with similar IQs and do badly on the dating market; normal people frequently bond over their lack of math ability.
“Math is hard. Let’s go shopping!” — Barbie
Even at the elementary level, math and science are given short shrift. How many schools have a “library” for math and science exploration in the same way they have a “library” for books? I have seen the lower elementary curriculum; kindergarteners are expected to read small books and write full sentences, but by the end of the year, they are only expected to count to 20 and add/subtract numbers up to 5. (eg, 1+4, 2+3, 3-2, etc.)
The claim that math/science abilities are more important than verbal abilities probably stems primarily from high-math/science people who recognize their fields’ contributions to so many important parts of modern life and are annoyed (or angry) about the lack of recognition they receive.
I was originally going to use La Griffe du Lion’s Smart Fraction Theory to calculate this, but then I discovered that it doesn’t make any practical difference, so went with the simpler metric of IQ.
We have a correlation, but it’s not huge. There are a few states that seem like obvious outliers–the two states with the highest GDP per cap were Alaska (oil) and Delaware (tax haven of some sort.) Among under-performers, I speculate that Maine is being held back by geography (it’s really cold.) California has a low average IQ, but an abnormally wide IQ range, due to the presence of Stanford and Silicon Valley and the like, while West Virginia may have the opposite problem of an unusually narrow IQ range (it also has the problem of being in the mountains.) In these two cases, if I could actually calculate the smart fraction instead of using Griffe’s assumption of Gaussian distribution around the average, I’d probably get a more accurate result.
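The Gaussian assumption makes a smart fraction easy to compute, and it also shows why the spread of a state's IQ distribution matters as much as its mean. A minimal sketch (the threshold, means, and standard deviations here are illustrative values, not Griffe's fitted parameters or real state data):

```python
from statistics import NormalDist

def smart_fraction(mean_iq, threshold=106, sd=15):
    """Fraction of a population above an IQ threshold, assuming the
    Gaussian-around-the-mean distribution the smart-fraction model uses."""
    return 1 - NormalDist(mean_iq, sd).cdf(threshold)

# Two hypothetical states with the same mean but different spreads
# illustrate the California problem: with a wide IQ range, the
# Gaussian assumption around the mean understates the smart fraction.
narrow = smart_fraction(98, sd=12)
wide = smart_fraction(98, sd=18)
print(f"narrow spread: {narrow:.3f}, wide spread: {wide:.3f}")
```

With identical means, the wide-spread population has a noticeably larger fraction above the threshold, which is roughly the correction one would want to make for a California-type state, and the reverse for a West Virginia-type one.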
I decided to try running the regression again without the states with obvious external factors–California, Hawaii, Nevada, Alaska, West Virginia, Delaware, Maine, and Vermont–like tourism, climate, gambling, or oil. I did not eliminate outliers that did not have (potentially) clear reasons for their under- or over- performance (for example, I have no idea why Idaho should do worse than Wyoming. I also left in Louisiana, whose over-performance may be due to having a significant port and/or tourism.)
Random chance matters. An oil boom in your area, nice beaches, or a long, harsh winter can push a state (or country) into wealth or poverty.
I suspect that redistribution strategies (ie, welfare) prevent states from dropping below a certain level, hence the near-flat line around $32,000. (Outliers at Mississippi and W. Virginia.)
One of the subjects people care most about in ev psych and related disciplines is intelligence. Teachers would love it if all of their students suddenly began scoring in the 90th percentile; so would parents, of course. Tons of psychological studies have been done on subjects like "Do people score better on tests after thinking about famous scientists?" (without finding much of use), not to mention millions of dollars spent on education reform without, as far as I can tell, much real change in school performance.
Since “IQ”–our best attempt at measuring and quantifying intelligence–appears to be at least 50% genetic, genes are a good spot to look when attempting to unravel the mystery of genius.
One of my theories on the subject is that if there are two kinds of dumb, perhaps there are two kinds of smart. Obviously dropping someone on their head is not going to produce genius, but perhaps some people are smart due to the good luck of having a variety of genes that generally code for things leading to high IQ, while other people are smart because of a few particular genes or mutations. The folks with the all-around IQ-boosting genes come from parents and extended families with IQs similar to their own, but folks with rare, particular, or novel mutations/genes would likely stand out even within their families. Such genes might have deleterious side effects, or confer genius in only one or two particular arenas, resulting in, say, the stereotypical absent-minded professor or idiot savant.
If genius is fragile–my definition of fragile, not necessarily anyone else’s–then it is easily damaged; the difference between high-IQ and low-IQ in a particular population will be related to the possession of deleterious mutations that damage IQ. If IQ is not fragile–that is, if it is robust–then we would find rare, beneficial genes that boost IQ.
Environmentally, it is already obvious that genius is fragile–that is, it is much easier to drop someone on their head and subtract 40 IQ points than to find any intervention that will reliably add 40 points–but this does not necessarily preclude a variety of interesting genetic findings.
Perhaps I am thinking about this all wrong, but that’s the structure I’ve got worked out so far.
Anyway, so people have been searching for genes linked to IQ. Will they find specific IQ-boosting genes that highly intelligent people have, but dumb people don't? Or will they find specific IQ-damaging genes that dumb people have but intelligent people don't? (Or maybe a combination of both?)
So, Neuroscience News recently covered a study published in Molecular Psychiatry that looked at genetic differences between highly intelligent people and the general population.
Now, I’m going to have to stop and point out a potential design flaw, at least according to the article:
“Published today in Molecular Psychiatry, the King’s College London study selected 1,400 high-intelligence individuals from the Duke University Talent Identification Program. Representing the top 0.03 per cent of the ‘intelligence distribution’, these individuals have an IQ of 170 or more – substantially higher than that of Nobel Prize winners, who have an average IQ of around 145.”
Duke TIP is aimed at middle schoolers, based largely on their elementary-school test scores. Anything that starts out by comparing the IQs of elementary-school kids to people who've already won Nobel Prizes may not be saying much.
Second, I’d just like to note that while the article is unclear, they are probably not claiming that all Duke TIP participants have IQs over 170, since they don’t–Duke TIP’s own website states that they only require IQ scores over 125. Rather, I suspect they used the test scores submitted to the TIP program to select students with IQs over 170. If some confusion has occurred and they actually used people with 125s, well, results may not be as claimed.
Quick rough calculations indicate that 1,400 people in the top 0.03% is not an unreasonable number: it would require a screening pool of only about 4.67 million people, there are about 4 million kids per grade level in the US, TIP takes from multiple grades, and they could have used multiple years' worth of participants. But I don't know how many kids TIP takes each year.
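The back-of-the-envelope check is easy to reproduce (a sketch using the figures above; the 4-million-per-grade number is the rough US cohort estimate, not an exact count):

```python
selected = 1400              # high-IQ individuals in the study
top_fraction = 0.0003        # the top 0.03% of the distribution
kids_per_grade = 4_000_000   # rough number of US kids per grade level

# How big a pool must you screen to find 1,400 people in the top 0.03%?
required_pool = selected / top_fraction
print(f"required pool: {required_pool:,.0f}")  # ~4.67 million

# That's only a bit more than one grade-level cohort, so drawing from
# several grades and several years of TIP participants would suffice.
cohorts_needed = required_pool / kids_per_grade
print(f"grade-cohorts needed: {cohorts_needed:.2f}")
```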
“The study focused, for the first time, on rare, functional SNPs – rare because previous research had only considered common SNPs and functional because these are SNPs that are likely to cause differences in the creation of proteins.
“The researchers did not find any individual protein-altering SNPs that met strict criteria for differences between the high-intelligence group and the control group. However, for SNPs that showed some difference between the groups, the rare allele was less frequently observed in the high intelligence group. This observation is consistent with research indicating that rare functional alleles are more often detrimental than beneficial to intelligence. …
‘Rare functional alleles do not account for much on their own but in combination, their impact is significant.
‘Our research shows that there are not genes for genius. However, to have super-high intelligence you need to have many of the positive alleles and importantly few of the negative rare effects, such as the rare functional alleles identified in our study.’
Or as the abstract puts it:
We did not observe any individual protein-altering variants that are reproducibly associated with extremely high intelligence and within the entire distribution of intelligence.* Moreover, no significant associations were found for multiple rare alleles within individual genes. However, analyses using genome-wide similarity between unrelated individuals (genome-wide complex trait analysis) indicate that the genotyped functional protein-altering variation yields a heritability estimate of 17.4% (s.e. 1.7%) based on a liability model. In addition, investigation of nominally significant associations revealed fewer rare alleles associated with extremely high intelligence than would be expected under the null hypothesis. This observation is consistent with the hypothesis that rare functional alleles are more frequently detrimental than beneficial to intelligence.
*What does “and within the entire distribution of intelligence” mean in this sentence?
To be honest, I'm not sure about the interpretation that the only genetic difference between high-IQ and low-IQ people is that the low-IQ carry more deleterious mutations. For starters, we observe ethnic variation in IQ scores, and I find it difficult to believe that vast swathes of the planet, some of which have very different marriage patterns, have abnormally high levels of deleterious, fitness-reducing mutations that other swathes of the planet don't.
I certainly can believe, though, that there are deleterious mutations that reduce IQ.
Time Preference isn’t sexy and exciting, like anything related to, well, sex. It isn’t controversial like IQ and gender. In fact, most of the ink spilled on the subject isn’t even found in evolutionary or evolutionary psychology texts, but over in economics papers about things like interest rates that no one but economists would want to read.
So why do I think Time Preference is so important?
Time Preference (aka future time orientation, time discounting, delay discounting, temporal discounting,) is the degree to which you value having a particular item today versus having it tomorrow. “High time preference” means you want things right now, whereas “low time preference” means you’re willing to wait.
A relatively famous test of Time Preference is to offer a child a cookie right now, but tell them they can have two cookies if they wait 10 minutes. Some children take the cookie right now, some wait ten minutes, and some try to wait ten minutes but succumb to the cookie right now about halfway through.
Obviously, many factors can influence your Time Preference–if you haven’t eaten in several days, for example, you’ll probably not only eat the cookie right away, but also start punching me until I give you the second cookie. If you don’t like cookies, you’ll have no trouble waiting for the second one, but only because you don’t care about cookies in the first place. Etc. But all else held equal, your basic inclination toward high or low time preference is probably biological–and by “biological,” I mean, “mostly genetic.”
The scientists train rats to touch pictures with their noses in return for sugar cubes. Picture A gives them one cube right away, while picture B gives them more cubes after a delay. If the delay is too long or the reward too small, the rats just take the one cube right away. But there’s a sweet spot–apparently 4 cubes after a short wait–where the rats figure it’s worth their while to tap picture B instead of picture A.
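The rats’ trade-off is exactly what the delay-discounting literature models with a hyperbolic discount function, V = A/(1 + kD): the subjective value of a reward shrinks as the delay grows. Here’s a minimal sketch–the discount rate `k` and the delays below are made-up numbers for illustration, not values from the actual rat experiment:

```python
def discounted_value(amount, delay, k):
    """Hyperbolic discounting: subjective value of a reward of size
    `amount` received after `delay` seconds, given discount rate `k`."""
    return amount / (1 + k * delay)

k = 0.1  # hypothetical discount rate, purely for illustration

one_cube_now = discounted_value(1, 0, k)       # 1.0
four_cubes_soon = discounted_value(4, 5, k)    # 4/1.5, about 2.67: worth waiting
four_cubes_later = discounted_value(4, 60, k)  # 4/7, about 0.57: take the cube now

print(one_cube_now, four_cubes_soon, four_cubes_later)
```

A rat (or child) with a higher k discounts the future more steeply and grabs the single cube; on this toy model, snipping the hippocampus–accumbens connection behaves like sending k to infinity.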
But if you snip the connection between the rats’ hippocampi and nucleus accumbenses, suddenly they lose all ability to wait for sugar cubes and just eat their sugar cubes right now, like a pack of golden retrievers in a room full of squeaky toys. They become completely unable to wait for the better payout of four sugar cubes, no matter how much they might want to.
So we know that this connection between the hippocampus and the nucleus accumbens is vitally important to your Time Preference, though I don’t know what other modifications, such as low hippocampal or nucleus accumbens volume, would do.
So what do the hippocampus and nucleus accumbens do?
According to Wikipedia, the hippocampus plays an important part in inhibition, memory, and spatial orientation. People with damaged hippocampi become amnesiacs, unable to form new memories. There is a pretty direct relationship between hippocampus size and memory, as documented primarily in old people:
“There is, however, a reliable relationship between the size of the hippocampus and memory performance — meaning that not all elderly people show hippocampal shrinkage, but those who do tend to perform less well on some memory tasks. There are also reports that memory tasks tend to produce less hippocampal activation in elderly than in young subjects. Furthermore, a randomized-control study published in 2011 found that aerobic exercise could increase the size of the hippocampus in adults aged 55 to 80 and also improve spatial memory.” (wikipedia)
Amnesiacs (and Alzheimer’s patients) also get lost a lot, which seems like a perfectly natural side effect of not being able to remember where you are, except that rat experiments show something even more interesting: specific cells that light up as the rats move around, encoding data about where they are.
“Neural activity sampled from 30 to 40 randomly chosen place cells carries enough information to allow a rat’s location to be reconstructed with high confidence.” (wikipedia)
According to Wikipedia, the inhibition theory of hippocampal function is a little older than the memory theories, but it seems like a perfectly reasonable theory to me.
“[Inhibition function theory] derived much of its justification from two observations: first, that animals with hippocampal damage tend to be hyperactive; second, that animals with hippocampal damage often have difficulty learning to inhibit responses that they have previously been taught, especially if the response requires remaining quiet as in a passive avoidance test.”
This is, of course, exactly what the scientists found when they separated the rats’ hippocampi from their nucleus accumbenses–they lost all ability to inhibit their impulses in order to delay gratification, even for a better payout.
In other words, the hippocampus lets you learn, process the movement of objects through space (spatial reasoning), and helps you inhibit your impulses–that is, it is directly involved in both IQ and Time Preference.
Dopaminergic input from the VTA modulates the activity of neurons within the nucleus accumbens. These neurons are activated directly or indirectly by euphoriant drugs (e.g., amphetamine, opiates, etc.) and by participating in rewarding experiences (e.g., sex, music, exercise, etc.). …
The shell of the nucleus accumbens is involved in the cognitive processing of motivational salience (wanting) as well as reward perception and positive reinforcement effects. Particularly important are the effects of drug and naturally rewarding stimuli on the NAc shell because these effects are related to addiction. Addictive drugs have a larger effect on dopamine release in the shell than in the core. The specific subset of ventral tegmental area projection neurons that synapse onto the D1-type medium spiny neurons in the shell are responsible for the immediate perception of the rewarding property of a stimulus (e.g., drug reward). …
The nucleus accumbens core is involved in the cognitive processing of motor function related to reward and reinforcement. Specifically, the core encodes new motor programs which facilitate the acquisition of a given reward in the future.
So it sounds to me like the point of the nucleus accumbens is to learn “That was awesome! Let’s do it again!” or “That was bad! Let’s not do it again!”
Together, the nucleus accumbens + hippocampus can learn “4 sugar cubes in a few seconds is way better than 1 sugar cube right now.” Apart, the nucleus accumbens just says, “Sugar cubes! Sugar cubes! Sugar cubes!” and jams the lever that says “Sugar cube right now!” and there is nothing the hippocampus can do about it.
What distinguishes humans from all other animals? Our big brains, intellects, or impressive vocabularies?
It is our ability to acquire new knowledge and use it to plan and build complex, multi-generational societies.
Ants and bees live in complex societies, but they do not plan them. Monkeys, dolphins, squirrels, and even rats can plan for the future, but only humans plan and build cities.
Even the hunter-gatherer must plan for the future; a small tendril only a few inches high is noted during the wet season, then returned to in the dry, when it is little more than a withered stem, and the water-storing root beneath it harvested. The farmer facing winter stores up grain and wood; the city engineer plans a water and sewer system large enough to handle the next hundred years’ projected growth.
All of these activities require the interaction between the hippocampus and nucleus accumbens. The nucleus accumbens tells us that water is good, grain is tasty, fire is warm, and that clean drinking water and flushable toilets are awesome. The hippocampus reminds us that the dry season is coming, and so we should save–and remember–that root until we need it. It reminds us that we will be cold and hungry in winter if we don’t save our grain and spend hours and hours chopping wood right now. It reminds us that not only is it good to organize the city so that everyone can have clean drinking water and flushable toilets right now, but that we should also make sure the system will keep working even as new people enter the city over time.
Disconnect these two, and your ability to plan goes down the drain. You eat all of your roots now, devour your seed corn, refuse to chop wood, and say, well, yes, running water would be nice, but that would require so much planning.
As I have mentioned before, I think European IQ (and probably that of a few other groups whose history I’m just not familiar enough with to comment on) increased quite a bit in the past thousand years or so, and not just because the Catholic Church banned cousin marriage. During this time, manorialism became a big deal throughout Western Europe, and the people who exhibited good impulse control, worked hard, delayed gratification, and were able to accurately calculate the long-term effects of their actions tended to succeed (that is, have lots of children) and pass on their clever traits to their children. I suspect that selective pressure for “be a good manorial employee” was particularly strong in Germany, (and possibly Japan, now that I think about it,) resulting in the Germanic rigidity that makes them such good engineers.
Nothing in the manorial environment directly selected for engineering ability, higher math, large vocabularies, or really anything that we mean when we normally talk about IQ. But I do expect manorial life to select for those who could control their impulses and plan for the future, resulting in a runaway effect of increasingly clever people constructing increasingly complex societies in which people had to be increasingly good at dealing with complexity and planning to survive.
Ultimately, I see pure mathematical ability as a side effect of being able to accurately predict the effects of one’s actions and plan for the future (e.g., “It will be an extra long winter, so I will need extra bushels of corn,”) and the ability to plan for the future as a side effect of being able to accurately represent the path of objects through space and remember lessons one has learned. All of these things, ultimately, are the same operations, just oriented differently through the space-time continuum.
Since your brain is, of course, built from the same DNA code as the rest of you, we would expect brain functions to have some amount of genetic heritability, which is exactly what we find:
“A meta-analysis of twin, family and adoption studies was conducted to estimate the magnitude of genetic and environmental influences on impulsivity. The best fitting model for 41 key studies (58 independent samples from 14 month old infants to adults; N=27,147) included equal proportions of variance due to genetic (0.50) and non-shared environmental (0.50) influences, with genetic effects being both additive (0.38) and non-additive (0.12). Shared environmental effects were unimportant in explaining individual differences in impulsivity. Age, sex, and study design (twin vs. adoption) were all significant moderators of the magnitude of genetic and environmental influences on impulsivity. The relative contribution of genetic effects (broad sense heritability) and unique environmental effects were also found to be important throughout development from childhood to adulthood. Total genetic effects were found to be important for all ages, but appeared to be strongest in children. Analyses also demonstrated that genetic effects appeared to be stronger in males than in females.”
“Shared environmental effects” in a study like this means “the environment you and your siblings grew up in, like your household and school.” In this case, shared effects were unimportant–that means that parenting had no effect on the impulsivity of adopted children raised together in the same household. Non-shared environmental influences are basically random–you bumped your head as a kid, your mom drank during pregnancy, you were really hungry or pissed off during the test, etc., and maybe even cultural norms.
So your ability to plan for the future appears to be part genetic, and part random luck.
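The variance components quoted in that meta-analysis can be checked with simple arithmetic: they are proportions of total variance and, by construction, should sum to one.

```python
# Variance components reported in the quoted impulsivity meta-analysis
additive_genetic = 0.38        # A: additive genetic effects
non_additive_genetic = 0.12    # D: non-additive (dominance) genetic effects
shared_environment = 0.00      # C: found to be unimportant
non_shared_environment = 0.50  # E: unique environment, includes measurement error

# Broad-sense heritability counts all genetic variance, A + D
broad_heritability = additive_genetic + non_additive_genetic
total = broad_heritability + shared_environment + non_shared_environment

print(broad_heritability)  # 0.50, the "equal proportions" in the abstract
print(total)               # sums to 1.00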
While researching the previous post, I came across a claim that the Pygmies are retarded due to having IQs around 55.
No, the Pygmies are not retarded.
If you’ve already read Two Kinds of Dumb, you already know why, and don’t need to continue on. But if you’ve just wandered in, here’s the quick and dirty version:
An actual diagnosis of mental retardation requires not only a low IQ score (I think the bar is 75 but could be 70, I forget,) but also major life impairments. That is, the person must be unable to do, unsupervised, the normal things people do to function, like hold down a job, get dressed, or feed themselves.
While I don’t know the exact IQs of the pygmies, all of the evidence I’ve seen suggests that the average is probably pretty low. For starters, books are heavy, so hunter-gatherers tend not to carry them around, which has a real impact on the average hunter-gatherer’s ability to read. Second, hunter-gatherers tend not to conduct much trade, so they tend not to need much in the way of mathematics. Some groups don’t even have words for numbers over three. Such groups tend to score lousily on math tests.
I’ve searched high and low for whether or not Pygmy languages contain words for numbers over 3, and come up with nada. But I think Pygmies tend to use a lot of words from other languages/be multi-lingual, so if the Pygmies are speaking some other language they picked up from an agricultural tribe, the language could easily have a full suite of number words whether the Pygmies had any interest in numbers or not.
Third, given neither books nor maths in Pygmy history, it’s unlikely that there’s been any selective pressure on the Pygmies to adapt to readin’ and ‘rithmetic.
Fourth, there is a pretty strong correlation between IQ scores and technological complexity. You don’t have to think of IQ as “intelligence” if you don’t want to, but whatever it is, it is necessary for building technologically complex societies. If the hunter-gatherer lifestyle is your thing, then you don’t need much in the way of IQ.
And fifth, their heads are kind of small. Unfortunately, brains have to go somewhere, and this poses a limit on grey matter.
That said, Pygmies are perfectly functional in their environment. They can hunt and gather their own food, carry on some trade with their neighbors, build their own houses, make their own clothes, get dressed, cook, take care of their children (one Wikipedia article claims that one Pygmy group has some of the highest levels of fatherly involvement in child-rearing in the world,) are bi- and tri-lingual, and otherwise conduct their lives.
If you and I got dropped in the rainforest, we’d probably die within three or four days.
To over-simplify, mental retardation is generally caused by some form of traumatic brain injury, say, by getting dropped on your head as a child, eating lead, or being born with an extra chromosome. These injuries change your IQ from what it should have been, and cause a general loss of brain functioning.
If you live in a society where the average IQ is 100, then the average person you meet with a 50 IQ is most likely someone who suffered a traumatic injury.
However, if you live in a society where the average IQ is 50, this is the normal, un-injured IQ of people in your society. It just means that people in your society are bad at reading and math, not that they were all dropped on their heads as infants and cannot care for themselves.
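The same point can be put in standard-deviation terms. IQ tests are conventionally normed to a mean of 100 and a standard deviation of 15; the norming convention is standard, but the populations below are hypothetical illustrations, not measured data.

```python
from math import erf, sqrt

def normal_cdf(x, mean, sd):
    """Fraction of a normal distribution falling below x."""
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

# In a population with mean 100, an IQ of 50 sits (50-100)/15, i.e. about
# 3.3 standard deviations below the mean: rarer than 1 in 1,000, so most
# such cases reflect injury, poisoning, or chromosomal disorders.
print(normal_cdf(50, 100, 15))

# In a population whose mean is 50 (same sd, for illustration), an IQ of 50
# is exactly average: no injury implied.
print(normal_cdf(50, 50, 15))  # 0.5
```

In other words, the same raw score is a tail event in one population and the dead center of the distribution in another.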
“But wait,” I hear you saying, “what if Pygmy low IQ is caused by malnutrition? After all, they ARE pretty short.”
Doubtful. There’s no reason to think that Pygmies would have been more malnourished than all of their neighbors for thousands of years (we have records going back that far.) Also, their height is genetic (see studies on Pygmy genetics,) not due to malnutrition. According to Westhunter, an average-height person would have to starve to death twice before mere malnourishment would make them as short as a Pygmy.
Are Pygmies human?
I’ve also come across this question during my research, so I think it bears addressing.
Look, the term “human” is a social construct. So is the whole concept of “species.” You can come up with a personal definition of “human,” if you feel like it, that doesn’t include the Pygmies. Certainly their neighbors, who rape, murder, eat, and enslave the Pygmies (and sometimes evict them to make more room for gorillas,) do not regard the Pygmies as human. Personally, I look down on the Pygmies’ neighbors for their despicable behavior toward the Pygmies, rather than look down on the Pygmies for their stature and lifestyle.
Practically speaking, people only declare other groups of people “not humans” in order to justify killing them. I have no desire to kill the Pygmies; it seems more pleasant to me to live in a world where Pygmies exist, while still recognizing them as one of the most genetically distinct groups on Earth.