I do not believe that IQ tests measure intelligence. Rather I believe that they measure a combination of intelligence, learning and concentration at a particular point in time. …
You may wish to read the whole thing there.
The short response is that I basically agree with the bit quoted, and I suspect that virtually everyone who takes IQ tests seriously does as well. We all know that if you come into an IQ test hungover, sick, and desperately needing to pee, you’ll do worse than if you’re well-rested, well-fed, and feeling fine.
That time I fell asleep during finals?
Not so good.
Folks who study IQ for a living, like the famous Flynn, believe that environmental effects like the elimination of leaded gasoline and general improvements in nutrition have raised average IQ scores over the past century or two. (Which I agree seems pretty likely.)
The ability to sit still and concentrate is especially variable in small children–little boys in particular are notorious for preferring to run and play rather than sit at a desk and solve problems. And while real IQ tests (as opposed to the SAT) have been designed not to hinge on whether or not a student has learned a particular word or fact, the effects of environmental “enrichment” such as better schools or high-IQ adoptive parents do show up in children’s test scores–but fade away as children grow up.
There’s a very sensible reason for this. I am reminded here of an experiment I read about some years ago: infants (probably about one year old) were divided into two groups, and one group was taught how to climb the stairs. Six months later, the special-instruction group was still better at stair-climbing than the no-instruction group. But two years later, both groups of children were equally skilled at stair-climbing.
There is only so good anyone will ever get at stair-climbing, after all, and after two years of practice, everyone is about equally talented.
The sensible conclusion is that we should never evaluate an entire person based on just one IQ test result (especially in childhood.)
The mistake some people (not Chauncey Tinker) make is to jump from “IQ tests are not 100% reliable” to “IQ tests are meaningless.” Life is complicated, and people like to sort it into neat little packages. Friend or foe, right or wrong. And while a single IQ test is insufficient to judge an entire person, the results of multiple IQ tests are fairly reliable–and if we aggregate our results over multiple people, we get even better results.
As with all data, more tests + more people => random error matters less.
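To make the aggregation point concrete, here is a minimal simulation sketch (Python, with made-up numbers for a hypothetical test-taker–the true score and the noise level are assumptions, not data) showing why a single noisy test can miss badly while the average of several tests hovers near the underlying ability:

```python
import random
import statistics

random.seed(42)

TRUE_ABILITY = 110   # hypothetical test-taker's underlying ability (assumed)
NOISE_SD = 8         # assumed day-to-day measurement noise (hungover, sick, lucky guesses...)

def take_test(true_ability, noise_sd=NOISE_SD):
    """One noisy IQ test: underlying ability plus random day-to-day error."""
    return true_ability + random.gauss(0, noise_sd)

# A single test can land well off 110; averaging many tests converges toward it.
for n_tests in (1, 5, 25):
    scores = [take_test(TRUE_ABILITY) for _ in range(n_tests)]
    print(f"{n_tests:>2} test(s): mean score = {statistics.mean(scores):.1f}")
```

The same logic applies across people: averaging over many test-takers shrinks the random error in the group estimate, even though any individual score stays noisy.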
I think the “IQ tests are meaningless” crowd is operating under the assumption that IQ scholars are actually dumb enough to blindly judge an entire person based on a single childhood test. (Dealing with this strawman becomes endlessly annoying.)
So this complicated-looking graph shows us the effects of different factors on IQ scores over time, using several different data sets (mostly twin studies.)
At 5 years old, “genetic” factors (the diamond and thick lines) are less important than “shared environment.” Shared environment = parenting and teachers.
That is, at the age of 5, a pair of identical twins who were adopted by two different families will have IQ scores that look more like their adoptive parents’ IQ scores than their genetic relatives’ IQ scores. Like the babies taught to climb stairs before their peers, the kids whose parents have been working hard to teach them their ABCs score better than kids whose parents haven’t.
By the age of 7, however, this parenting effect has become less important than genetics. This means that those adopted kids are now starting to have IQ scores more similar to their biological relatives than to their adoptive relatives. Like the kids from the stair-climbing experiment, their scores are now more based on their genetic abilities (some kids have better balance and coordination, resulting in better stair-climbing) than on whatever their parents are doing with them.
By the age of 12, the effects of parenting drop to around 0. At this point, it’s all up to the kid.
Of course, adoption studies are not perfect–adoptive parents are not randomly selected and have to jump through various hoops to prove that they will be decent parents, and so tend not to be the kinds of people who lock their children in closets or refuse to feed them. I am sure this kind of parenting does terrible things to IQ, but there is no ethical way to design a randomized study to test it. Thankfully, the % of children subject to such abysmal parenting is very low. Within the normal range of parenting practices, parenting doesn’t appear to have much (if any) effect on adult IQ.
The point of all this is that what I think Chauncey means by “learning,” that is, advantages some students have over others because they’ve learned a particular fact or method before the others do, does appear to have an effect on childhood IQ scores, but this effect fades with age.
I think Pumpkin Person is fond of saying that life is the ultimate IQ test.
While we can probably all attest to a friend who is “smart but lazy,” or smart but interested in a field that doesn’t pay very well, like art or parenting, the correlation between IQ and life outcomes (eg, money) is amazingly solid:
And if this makes us feel mercenary, well, other traits also correlate. (Graph shamelessly stolen from Jayman.)
There’s a simple reason why this correlation holds despite lazy and non-money-oriented smart people: there are also lazy and non-money-oriented dumb people, and lazy smart people tend to make more money and make better long-term financial decisions than lazy dumb people.
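Here is a toy sketch of that argument (Python, with entirely invented weights and dollar amounts, purely to illustrate the statistics): even when an independent “effort/interest” factor and plain luck swamp any individual outcome, the IQ–income correlation across a large sample stays clearly positive.

```python
import random
import statistics

random.seed(0)

def simulate_person():
    iq = random.gauss(100, 15)        # population IQ on the standard scale
    effort = random.gauss(0, 1)       # laziness / career-interest factor, independent of IQ
    luck = random.gauss(0, 10_000)    # everything else
    # Invented income model: IQ helps, but effort and luck matter a lot too.
    income = 50_000 + 400 * (iq - 100) + 15_000 * effort + luck
    return iq, income

iqs, incomes = zip(*(simulate_person() for _ in range(10_000)))

# Lazy smart people still out-earn lazy dumb people on average,
# so the correlation survives the individual exceptions.
print(f"IQ-income correlation: {statistics.correlation(iqs, incomes):.2f}")
```

The exact number depends entirely on the invented weights; the point is only that variation in effort and luck weakens the correlation without erasing it.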
Note that none of these graphs are the result of a single test. A single test would, indeed, be useless.
One of my kids enjoys watching YouTube cooking videos, and they’re nearly 100% women making cakes.
Women’s magazines focus exclusively on 4 topics: men, fashion, diets, and cupcakes. You might think that diets and cupcakes are incompatible, but women’s magazines believe otherwise:
Just in case it’s not clear, that is not a watermelon. It is cake, cleverly disguised as a watermelon.
(YouTube has videos that show you how to make much better cake watermelons–for starters, you want red velvet cake for the middle, not just frosting…)
Magazines specifically aimed at “people who want to make cakes” are also overwhelmingly feminine. Whether we’re talking wedding cakes or chocolate cravings, apple pastries or donuts, sweets and women just seem to go together.
If men’s magazines ever feature food, I bet they’re steak and BBQ. (*Image searches*)
Yup.
The meat-related articles do appear to be a little more gender-neutral than the cupcake-related articles–probably because men don’t tend to decorate their steaks with tiny baseball bats cut out of steak the way women like to decorate their cakes with tiny flowers made out of frosting.
It’s almost as if women have some kind of overwhelming craving for fats and sugars that men don’t really share.
I was talking with a friend recently about their workplace, where, “All of the women are on diets, but none of them can stay on their diets because they are all constantly eating at their workstations.” Further inquiries revealed that yes, they are eating sweets and pastries, not cashews and carrots, and that there is some kind of “office culture” of all of the women eating pastries together.
The irony here is pretty obvious.
Even many (most?) specialty “diet” foods are designed to still taste sweet. “Fat-free” yogurt is marketed as a health food even though it has as much sugar in it as a bowl of ice cream. Women are so attracted to the taste of sweet sodas, they drink disgusting Diet Coke. Dieting websites advise us that cake topped with fruit is “healthy.”
When men diet, they think “eat nothing but protein until ketosis kicks in” sounds like a great idea. When women diet, they want fat-free ice cream.
I don’t think it is just “women lack willpower.” (Or at least, not willpower in the sense of something people have much control over.) Rather, I think that men and women actually have substantially different food cravings.
So do children, for that matter.
Throughout most of human history, from hunter-gatherers to agriculturalists, the vast majority of women have specialized in obtaining (gathering, tending, harvesting,) plants. (The only exceptions are societies where people don’t eat plants, like the Inuit and the Masai, and our modern society, where most of us aren’t involved in food production.) By contrast, men have specialized in hunting, raising, and butchering animals–not because they were trying to hog the protein or had some sexist ideas about food production, but because animals tend to be bigger and heavier than women can easily lift. Dragging home and butchering large game requires significant strength.
I am inventing a “Just So” story, of course. But it seems sensible enough that each gender evolved a tendency to crave the particular kinds of foods it was most adept at obtaining.
Exercise wears down muscles, and protein is necessary to build them back up; an active lifestyle therefore requires protein. Our male ancestors’ most important activities were most likely heavy labor (eg, building huts, hauling firewood, butchering game,) and defending the tribe. Our female ancestors’ most important activities were giving birth and nursing children (we would not exist had they not, after all.) For these activities, women want to be fat. It’s not good enough to put on weight after you get pregnant, when the growing fetus is already dependent on its mother for nutrients. Far better for a woman to be plump before she gets pregnant (and to stay that way long after.)
Of course, this is “fat” by historical standards, not modern American standards.
I suspect, therefore, that women are naturally inclined to eat as much as possible of sweet foods in order to put on weight in preparation for pregnancy and lactation–only today, the average woman has 2 pregnancies instead of 12, and so instead of turning that extra weight into children and milk, it just builds up.
Obviously we are talking about a relatively small effect on food preferences, both because our ancestors could not afford to be too picky about what they ate, and because the genetic difference between men and women is slight–not like the difference between humans and lizards, say.
Interestingly, gender expression in humans appears to basically be female by default. If, by random chance, you are born with only one X chromosome, (instead of the normal XX or XY,) you can still survive. Sure, you’ll be short, you probably won’t menstruate, and you’ll likely have a variety of other issues, but you’ll be alive. By contrast, if you received only a Y chromosome from your parents and no accompanying X, you wouldn’t be here reading this post. You can’t survive with just a Y. Too many necessary proteins are encoded on the X.
Gender differences show up even in fetuses, but don’t become a huge deal until puberty, when the production of androgens and estrogens really cranks up.
Take muscle development: muscle development relies on the production of androgens (eg, testosterone.) Grownups produce more androgens than small children, and men produce more than women. Children can exercise and certainly children who do daily farm chores are stronger than children who sit on their butts watching TV all day, but children can’t do intense strength-training because they just don’t produce enough androgens to build big muscles. Women, likewise, produce fewer androgens, and so cannot build muscles at the same rate as men, though obviously they are stronger than children.
At puberty, boys begin producing the androgens that allow them to build muscles and become significantly stronger than girls.
Sans androgens, even XY people develop as female. (See Androgen Insensitivity Syndrome, in which people with XY chromosomes cannot absorb the androgens their bodies create, and so develop as female.) Children produce some androgens (obviously,) but not nearly as many as adults. Pre-pubescent boys, therefore, are more “feminine,” biologically, than post-pubescent men; puberty induces maleness.
All children seem pretty much obsessed with sweets, far more than adults. If allowed, they will happily eat cake until they vomit.
Even though food seems like a realm where evolution would heavily influence our tastes, it’s pretty obvious that culture has a huge effect. I doubt Jews have a natural aversion to pork or Hindus to beef. Whether you think chicken hearts are tasty or vomitous is almost entirely dependent on whether or not they are a common food in your culture.
But small children are blissfully less attuned to culture than grownups. Like little id machines, they spit out strained peas and throw them on the floor. They do not care about our notion that “vegetables are good for you.” This from someone who’ll eat bird poop if you let them.
The child’s affection for sweets, therefore, I suspect is completely natural and instinctual. Before the invention of refined sugars and modern food distribution systems, it probably kept them alive and healthy. Remember that the whole reason grownups try to eat more vegetables is that vegetables are low in calories. Grownups have larger stomachs and so can eat more than children, allowing them to extract adequate calories from low-calorie foods, but small children do not and cannot. In developing countries, children still have trouble getting enough calories in areas where food is abundant but consists mostly of low-calorie plants, which they simply cannot physically eat enough of. Children, therefore, are obsessed with high-calorie foods.
At puberty, this instinct changes for boys–orienting them more toward protein sources, which they are going to have to expend a lot of energy trying to haul back to their families for the rest of their lives, but stays basically unchanged in females.
ETA: I have found two more sources/items of relevance:
When it comes to what we eat, men and women behave differently: Men consume more beef, eggs, and poultry; while women eat more fruits and vegetables and consume less fat than do men. … The gender differences in preferences for healthier foods begin in childhood. Previous literature has found that girls choose healthier food and are fonder of fruits and vegetables than are boys. Boys rated beef, processed meat, and eggs as more desirable than did girls. …
Sensory (taste) differences between the genders are the second most widely ventured explanation for the differences in food choices, although it is not clear that such genetic differences actually exist. While the popular media argue that females prefer sweetness and dislike bitterness, while males may enjoy bitterness, academic literature on this matter is less conclusive. The bitter taste receptor, gene TAS2R38, has been associated with the ability to taste PROP (6-n-propylthiouracil),
one source of genetic variation in PROP and PTC taste. Individuals who experience bitterness strongly are assumed to also experience sweetness strongly relative to those who experience PROP as only slightly bitter. While previous studies found that inherited taste-blindness to bitter compounds such as PROP may be a risk factor for obesity, this literature has been hotly disputed.
The distribution of perceived bitterness of PROP differs among women and men, as does the correlation between genetic taste measures and acceptance of sweetness. A higher percentage of women are PROP and PTC tasters, sensing bitterness above threshold. It has been suggested that women are more likely to be supertasters, or those who taste with far greater intensity than average.
(I have removed the in-line citations for ease of reading; please refer to the original if you want them.)
Also:
Well, I don’t remember where this graph came from, but it looks like my intuitions were pretty good: males and females both have very low levels of testosterone during childhood, and during puberty their levels become radically different.
Honestly, left to my own devices, I wouldn’t own a TV. (With Mythbusters canceled, what’s the point anymore?)
Don’t get me wrong. I have watched (and even enjoyed) the occasional sitcom. I’ve even tried watching football. I like comedies. They’re funny. But after they end, I get that creeping feeling of emptiness inside, like when you’ve eaten a bowl of leftover Halloween candy instead of lunch. There is no “meat” to these programs–or vegan-friendly vegetable protein, if you prefer.
I do enjoy documentaries, though I often end up fast-forwarding through large chunks of them because they are full of filler shots of rotating galaxies or astronomers parking their telescopes or people… taalkiiing… sooo… sloooowwwwlllly… And sadly, if you’ve seen one documentary about ancient Egypt, you’ve seen them all.
Ultimately, time is a big factor: I am always running short. Once I’m done with the non-negotiables (like “take care of the kids” and “pay the bills,”) there’s only so much time left, and time spent watching TV is time not spent writing. Since becoming a competent writer is one of my personal goals, TV gets punted to the bottom of the list, slightly below doing the dishes.
Obviously not everyone writes, but I have a dozen other backup projects for when I’m not writing, everything from “read more books” to “volunteer” to “exercise.”
I think it is a common fallacy to default to assuming that other people are like oneself. I default to assuming that other people are time-crunched, running on 8 shots of espresso and trying to cram in a little time to read Tolstoy and get the tomatoes planted before they fall asleep. (And I’m not even one of those Type-A people.)
Obviously everyone isn’t like me. They come home from work, take care of their kids, make dinner, and flip on the TV.
Why?
An acquaintance recently made a sad but illuminating comment regarding their favorite TV shows, “I know they’re not real, but it feels like they are. It’s like they’re my friends.”
I think the simple answer is that we process the pictures on the TV as though they were real. TV people look like people and sound like people, so who cares if they don’t smell like people? Under normal (pre-TV) circumstances, if you hung out with some friendly, laughing people every day in your living room, they were your family. You liked them, they liked you, and you were happy together.
Today, in our atomized world of single parents, only children, spinsters and eternal bachelors, what families do we have? Sure, we see endless quantities of people on our way to work, but we barely speak, nod, or glance at each other, encapsulated within our own cars or occupied with checking Facebook on our cellphones while the train rumbles on.
As our connections to other people have withered away, we’ve replaced them with fake ones.
OZZIE & HARRIET: The Adventures of America’s Favorite Family
The Adventures of Ozzie and Harriet was the first and longest-running family situational comedy in television history. The Nelsons came to represent the idealized American family of the 1950s – where mom was a content homemaker, dad’s biggest decision was whether to give his sons the keys to the car, and the boys’ biggest problem was getting a date to the high school prom. …When it premiered, Ozzie & Harriet: The Adventures of America’s Favorite Family was the highest-rated documentary in A&E’s history.
(According to Wikipedia, Ozzie and Harriet started on the radio back in the 30s, got a comedy show (still on radio) in 1944, and were on TV from 1952-1966.) It was, to some extent, about a real family–the actors in the show were an actual husband and wife + their kids, but the show itself was fictionalized.
It even makes sense to ask people, “Who is your favorite TV personality?”–to which the most common answer isn’t Adam Savage or Jamie Hyneman, but Mark Harmon, who plays some made-up guy named Leroy Jethro Gibbs.
The rise of “reality TV” only makes the “people want to think of the TV people as real people they’re actually hanging out with” effect all the more palpable–and then there’s the incessant newsstand harping of celebrity gossip. The only thing I want out of a movie star (besides talent) is that I not recognize them; it appears that the only thing everyone else wants is that they do recognize them.
in Blockbusters: Hit-Making, Risk-Taking, and the Big Business of Entertainment, the new book by Anita Elberse, Filene professor of business administration. Elberse (el-BER-see) spent 10 years interviewing and observing film, television, publishing, and sports executives to distill the most profitable strategy for these high-profile, unpredictable marketplaces. … The most profitable business strategy, she says, is not the “long tail,” but its converse: blockbusters like Star Wars, Avatar, Friends, the Harry Potter series, and sports superstars like Tom Brady.
Strategically, the blockbuster approach involves “making disproportionately big investments in a few products designed to appeal to mass audiences,” … “Production value” means star actors and special effects. … a studio can afford only a few “event movies” per year. But Horn’s big bets for Warner Brothers—the Harry Potter series, The Dark Knight, The Hangover and its sequel, Ocean’s Eleven and its two sequels, Sherlock Holmes—drew huge audiences. By 2011, Warner became the first movie studio to surpass $1 billion in domestic box-office receipts for 11 consecutive years. …
Jeff Zucker ’86 put a contrasting plan into place as CEO at NBC Universal. In 2007 he led a push to cut the television network’s programming costs: … Silverman began cutting back on expensive dramatic content, instead acquiring rights to more reasonably priced properties; eschewing star actors and prominent TV producers, who commanded hefty fees; and authorizing fewer costly pilots for new series. The result was that by 2010, NBC was no longer the top-rated TV network, but had fallen to fourth place behind ABC, CBS, and Fox, and “was farther behind on all the metrics that mattered,” writes Elberse, “including, by all accounts, the profit margins Zucker and Silverman had sought most.” Zucker was asked to leave his job in 2010. …
From a business perspective, “bankable” movie stars like Julia Roberts, Johnny Depp, or George Clooney function in much the way Harry Potter and Superman do: providing a known, well-liked persona.
So people like seeing familiar faces in their movies (except Oprah Winfrey, who is apparently not a draw:
the 1998 film Beloved, starring Oprah Winfrey, based on Nobel Prize-winner Toni Morrison’s eponymous 1987 novel and directed by Oscar-winner Jonathan Demme … flopped resoundingly: produced for $80 million, it sold only $23 million in tickets.
Or maybe Beloved just isn’t the kind of feel-good action flick that drives movie audiences the way Batman is.)
But what about sports?
Here I am on even shakier ground than sitcoms. I can understand playing sports–they’re live action versions of video games, after all. You get to move around, exercise, have fun with your friends, and triumphantly beat them at something. (Or if you’re me, lose.) I can understand cheering for your kids and being proud of them as they get better and better at some athletic skill (or at least try hard at it.)
I don’t understand caring about strangers playing a game.
I have no friends on the Yankees or the Mets, the Phillies or the Marlins. I’ve never met a member of the Alabama Crimson Tide or the Clemson Tigers, and I harbor no illusions that my children will ever play on such teams. I feel no loyalty to the athletes-drawn-from-all-over-the-country who play on my “hometown” team, and I consider athlete salaries vaguely obscene.
I find televised sports about as interesting as watching someone do math. If the point of the game is to win, then why not just watch a 5-minute summary at the end of the day of all the teams’ wins and losses?
But according to The Way of the Blockbuster:
Perhaps no entertainment realm takes greater care in building a brand name than professional sports: fan loyalty reliably builds repeat business. “The NFL is blockbuster content,” Elberse says. “It’s the most sought-after content we have in this country. Four of the five highest-rated television shows [in the United States] ever are Super Bowls. NFL fans spend an average of 9.5 hours per week on games and related content. That gives the league enormous power when it comes to negotiating contracts with television networks.”
Elberse has studied American football and basketball and European soccer, and found that selling pro sports has much in common with selling movies, TV shows, or books. Look at the Real Madrid soccer club—the world’s richest, with annual revenues of $693 million and a valuation of $3.3 billion. Like Hollywood studios, Real Madrid attracts fan interest by engaging superstars—such as Cristiano Ronaldo, the Portuguese forward the club acquired from Manchester United for a record $131.6 million in 2009. “We think of ourselves as content producers,” a Real Madrid executive told Elberse, “and we think of our product—the match—as a movie.” As she puts it: “It might not have Tom Cruise in it, but they do have Cristiano Ronaldo starring.
In America, sports stars are famous enough that even I know some of their names, like Peyton Manning, Serena Williams, and Michael Jordan.
I think the basic drive behind people’s love of TV sports is the same as their love of sitcoms (and dramas): they process it as real. And not just real, but as people they know: their family, their tribe. Those are their boys out there, battling for glory and victory against that other tribe’s boys. It’s vicarious warfare with pseudo-armies, a domesticated expression of the tribal urge to slaughter your enemies, drive off their cattle, and abduct their women. So what if the army isn’t “real,” if the heroes aren’t your brother or cousin but paid gladiators shipped in from thousands of miles away to perform for the masses? Your brain still interprets it as though it were; you still enjoy it.
Continuing with yesterday’s discussion (in response to a reader’s question):
Why are people snobs about intelligence?
Is math ability better than verbal?
Do people only care about intelligence in the context of making money?
1. People are snobs. Not all of them, obviously–just a lot of them.
So we’re going to have to back this up a step and ask why people are snobs, period.
Paying attention to social status–both one’s own and others’–is probably instinctual. We process social status in our prefrontal cortexes–the part of our brain generally involved in complex thought, imagination, long-term planning, personality, not being a psychopath, etc. Our brains respond positively to images of high-status items–activating reward-feedback loops that make us feel good–and negatively to images of low-status items–activating feedback loops that make us feel bad.
The mental effect is stronger when we perform high-status actions in front of others:
…researchers asked a person if the following statement was an accurate description of themselves: “I wouldn’t hesitate to go out of my way to help someone in trouble.” Some of the participants answered the question without anyone else seeing their response. Others knowingly revealed their answer to two strangers who were watching in a room next to them via video feed. The result? When the test subjects revealed an affirmative answer to an audience, their [medial prefrontal cortexes] lit up more strongly than when they kept their answers to themselves. Furthermore, when the participants revealed their positive answers not to strangers, but to those they personally held in high regard, their MPFCs and reward striatums activated even more strongly. This confirms something you’ve assuredly noticed in your own life: while we generally care about the opinions of others, we particularly care about the opinions of people who really matter to us.
(Note what constitutes a high-status activity.)
But this alone does not prove that paying attention to social status is instinctual. After all, I can also point to the part of your brain that processes written words (the Visual Word Form Area,) and yet I don’t assert that literacy is an instinct. For that matter, anything we think about has to be processed in our brains somewhere, whether instinct or not.
Better evidence comes from anthropology and zoology. According to Wikipedia, “All societies have a form of social status,” even hunter-gatherers. If something shows up in every single human society, that’s a pretty good sign that it is probably instinctual–and if it isn’t, it is so useful a thing that no society exists without it.
Even animals have social status–“Social status hierarchies have been documented in a wide range of animals: apes,[7] baboons,[8] wolves,[9] cows/bulls,[10] hens,[11] even fish,[12] and ants.[13]” We may also add horses, many monkey species, elephants, killer whales, reindeer, and probably just about all animals that live in large groups.
Among animals, social status is generally determined by a combination of physical dominance, age, relationship, and intelligence. Killer whale pods, for example, are led by the eldest female in the family; leadership in elephant herds is passed down from a deceased matriarch to her eldest daughter, even if the matriarch has surviving sisters. Male lions assert dominance by being larger and stronger than other lions.
In all of these cases, the social structure exists because it benefits the group, even if it harms some of the individuals in it. If having no social structure were beneficial for wolves, then wolf packs without alpha wolves would out-compete packs with alphas. This is the essence of natural selection.
Among humans, social status comes in two main forms, which I will call “earned” and “background.”
“Earned” social status stems from things you do, like rescuing people from burning buildings, inventing quantum physics, or stealing wallets. High status activities are generally things that benefit others, and low-status activities are generally those that harm others. This is why teachers are praised and thieves are put in prison.
Earned social status is a good thing, because it rewards people for being helpful.
“Background” social status is basically stuff you were born into or have no control over, like your race, gender, the part of the country you grew up in, your accent, name, family reputation, health/disability, etc.
Americans generally believe that you should not judge people based on background social status, but they do it, anyway.
Interestingly, high-status people are not generally violent. (Just compare crime rates by neighborhood SES.) Outside of military conquest, violence is the domain of the low-class and those afraid they are slipping in social class, not the high class. Compare Angela Merkel to the average German far-right protester. Obviously the protester would win in a fist-fight, but Merkel is still in charge. High class people go out of their way to donate to charity, do volunteer work, and talk about how much they love refugees. In the traditional societies of the Pacific Northwest, they held potlatches at which they distributed accumulated wealth to their neighbors; in our society, the wealthy donate millions to education. Ideally, in a well-functioning system, status is the thanks rich people get for doing things that benefit the community instead of spending their billions on gold-plated toilets.
The Arabian babbler … spends most of its life in small groups of three to 20 members. These groups lay their eggs in a communal nest and defend a small territory of trees and shrubs that provide much-needed safety from predators.
When it’s living as part of a group, a babbler does fairly well for itself. But babblers who get kicked out of a group have much bleaker prospects. These “non-territorials” are typically badgered away from other territories and forced out into the open, where they often fall prey to hawks, falcons, and other raptors. So it really pays to be part of a group. … Within a group, babblers assort themselves into a linear and fairly rigid dominance hierarchy, i.e., a pecking order. When push comes to shove, adult males always dominate adult females — but mostly males compete with males and females with females. Very occasionally, an intense “all-out” fight will erupt between two babblers of adjacent rank, typically the two highest-ranked males or the two highest-ranked females. …
Most of the time, however, babblers get along pretty well with each other. In fact, they spend a lot of effort actively helping one another and taking risks for the benefit of the group. They’ll often donate food to other group members, for example, or to the communal nestlings. They’ll also attack foreign babblers and predators who have intruded on the group’s territory, assuming personal risk in an effort to keep others safe. One particularly helpful activity is “guard duty,” in which one babbler stands sentinel at the top of a tree, watching for predators while the rest of the group scrounges for food. The babbler on guard duty not only foregoes food, but also assumes a greater risk of being preyed upon, e.g., by a hawk or falcon. …
Unlike chickens, who compete to secure more food and better roosting sites for themselves, babblers compete to give food away and to take the worst roosting sites. Each tries to be more helpful than the next. And because it’s a competition, higher-ranked (more dominant) babblers typically win, i.e., by using their dominance to interfere with the helpful activities of lower-ranked babblers. This competition is fiercest between babblers of adjacent rank. So the alpha male, for example, is especially eager to be more helpful than the beta male, but doesn’t compete nearly as much with the gamma male. Similar dynamics occur within the female ranks.
In the eighteenth and early nineteenth centuries, wealthy private individuals substantially supported the military, with a particular wealthy man buying stuff for a particular regiment or fort.
Noblemen paid high prices for military commands, and these posts were no sinecure. You got the obligation to substantially supply the logistics for your men, the duty to obey stupid orders that would very likely lead to your death, the duty to lead your men from in front while wearing a costume designed to make you particularly conspicuous, and the duty to engage in honorable personal combat, man to man, with your opposite number who was also leading his troops from in front.
A vestige of this tradition remains in that every English prince has been sent to war and has placed himself very much in harm’s way.
It seems obvious to me that a soldier being led by a member of the ruling class who is soaking up the bullets from in front is a lot more likely to be loyal and brave than a soldier sent into battle by distant rulers safely in Washington who despise him as a sexist homophobic racist murderer, and that a soldier who sees his commander, a member of the ruling classes, fighting right in front of him, is reflexively likely to fight.
(Note, however, that magnanimity is not the same as niceness. The only people who are nice to everyone are store clerks and waitresses, and they’re only nice because they have to be or they’ll get fired.)
Most people are generally aware of each others’ social statuses, using contextual clues like clothing and accents to make quick, rough estimates. These contextual clues are generally completely neutral–they just happen to correlate with other behaviors.
For example, there is nothing objectively good or bad for society about wearing your pants belted beneath your buttocks, aside from it being an awkward way to wear your pants. But the style correlates with other behaviors, like crime, drug use, aggression, low paternal investment, and unemployment, all of which are detrimental to society, and so the mere sight of underwear spilling out of a man’s pants automatically assigns him low status. There is nothing causal in this relationship–being a criminal does not make you bad at buckling your pants, nor does wearing your pants around your knees somehow inspire you to do drugs. But these things correlate, and humans are very good at learning patterns.
Likewise, there is nothing objectively better about operas than Disney movies, no real difference between a cup of coffee brewed in the microwave and one from Starbucks; a Harley Davidson and a Vespa are both motorcycles; and you can carry stuff around in just about any bag or backpack, but only the hoity-toity can afford something as objectively hideous as a $26,000 Louis Vuitton backpack.
All of these things are fairly arbitrary and culturally dependent–the way you belt your pants can’t convey social status in a society where people don’t wear pants; your taste in movies couldn’t matter before movies were invented. Among hunter-gatherers, social status is based on things like one’s skill at hunting, and if I showed up to the next PTA meeting wearing a top hat and monocle, I wouldn’t get any status points at all.
We tend to aggregate the different social status markers into three broad classes (middle, upper, and lower.) As Scott Alexander says in his post about Siderea’s essay on class in America, which divides the US into 10% Underclass, 65% Working Class, 23.5% Gentry Class, and 1.5% Elite:
Siderea notes that Church’s analysis independently reached about the same conclusion as Paul Fussell’s famous guide. I’m not entirely sure how you’d judge this (everybody’s going to include lower, middle, and upper classes), but eyeballing Fussell it does look a lot like Church, so let’s grant this.
It also doesn’t sound too different from Marx. Elites sound like capitalists, Gentry like bourgeoisie, Labor like the proletariat, and the Underclass like the lumpenproletariat. Or maybe I’m making up patterns where they don’t exist; why should the class system of 21st century America be the same as that of 19th century industrial Europe?
There’s one more discussion of class I remember being influenced by, and that’s Unqualified Reservations’ Castes of the United States. Another one that you should read but that I’ll summarize in case you don’t:
1. Dalits are the underclass, … 2. Vaisyas are standard middle-class people … 3. Brahmins are very educated people … 4. Optimates are very rich WASPs … now they’re either extinct or endangered, having been pretty much absorbed into the Brahmins. …
Michael Church’s system (henceforth MC) and the Unqualified Reservation system (henceforth UR) are similar in some ways. MC’s Underclass matches Dalits, MC’s Labor matches Vaisyas, MC’s Gentry matches Brahmins, and MC’s Elite matches Optimates. This is a promising start. It’s a fourth independent pair of eyes that’s found the same thing as all the others. (commenters bring up Joel Kotkin and Archdruid Report as similar convergent perspectives).
I suspect the tendency to try to describe society as consisting of three broad classes (with the admission that other, perhaps tiny classes that don’t exactly fit into the others might exist) is actually just an artifact of being a three-biased society that likes to group things in threes (the Trinity, three-beat joke structure, three bears, Three Musketeers, three notes in a chord, etc.) This three-bias isn’t a human universal (or so I have read) but has probably been handed down to us from the Indo-Europeans, (“Many Indo-European societies know a threefold division of priests, a warrior class, and a class of peasants or husbandmen. Georges Dumézil has suggested such a division for Proto-Indo-European society,”) so we’re so used to it that we don’t even notice ourselves doing it.
(For more information on our culture’s three-bias and different number biases in other cultures, see Alan Dundes’s Interpreting Folklore, though I should note that I read it back in high school and so my memory of it is fuzzy.)
(Also, everyone is probably at least subconsciously cribbing Marx, who was probably cribbing from some earlier guy who cribbed from another earlier guy, who set out with the intention of demonstrating that society–divided into nobles, serfs, and villagers–reflected the Trinity, just like those Medieval maps that show the world divided into three parts or the conception of Heaven, Hell, and Purgatory.)
At any rate, I am skeptical of any system that lumps 65% of people into one social class and 0.5% of people into a different social class–such a scheme is potentially too finely grained at one end of the scale and not finely grained enough at the other. Determining the exact number of social classes in American society may ultimately be futile–perhaps there really are three (or four) highly distinct groups, or perhaps social classes transition smoothly from one to the next with no sharp divisions.
I lean toward the latter theory, with broad social classes as merely a convenient shorthand for extremely broad generalizations about society. If you look any closer, you tend to find that people do draw finer-grained distinctions between themselves and others than “65% Working Class” would imply. For example, a friend who works in agriculture in Greater Appalachia once referred dismissively to other people they had to deal with as “red necks.” I might not be able to tell what differentiates them, but clearly my friend could. Similarly, I am informed that there are different sorts of homelessness, from true street living to surviving in shelters, and that lifetime homeless people are a different breed altogether. I might call them all “homeless,” but to the homeless, these distinctions are important.
Is social class evil?
This question was suggested by a different friend.
I suspect that social class is basically, for the most part, neutral-to-useful. I base this on the fact that most people do not work very hard to erase markers of class distinction, but instead actively embrace particular class markers. (Besides, you can’t get rid of it, anyway.)
It is not all that hard to learn the norms and values of a different social class and strategically employ them. Black people frequently switch between speaking African American Vernacular English at home and standard English at work; I can discuss religion with Christian conservatives and malevolent AI risk with nerds; you can purchase a Harley Davidson t-shirt as easily as a French beret and scarf.
(I am reminded here of an experiment in which researchers were looking to document cab drivers refusing to pick up black passengers; they found that when the black passengers were dressed nicely, drivers would pick them up, but when they wore “ghetto” clothes, the cabs wouldn’t. Cabbies: responding more to perceived class than race.)
And yet, people don’t–for the most part–mass adopt the social markers of the upper class just to fool them. They love their motorcycle t-shirts, their pumpkin lattes, even their regional accents. Class markers are an important part of people’s cultural/tribal identities.
But what about class conflicts?
Because every class has its own norms and values, every class is, to some degree, disagreeing with the other classes. People for whom frugality and thrift are virtues will naturally think that people who drink overpriced coffee are lacking in moral character. People for whom anti-racism is the highest virtue will naturally think that Trump voters are despicable racists. A Southern Baptist sees atheists as morally depraved fetus murderers; nerds and jocks are famously opposed to each other; and people who believe that you should graduate from college, become established in your career, get married, and then have 0-1.5 children disapprove of people who drop out of high school, have a bunch of children with a bunch of different people, and go on welfare.
A moderate sense of pride in one’s own culture is probably good and healthy, but spending too much energy hating other groups is probably negative–you may end up needlessly hurting people whose cooperation you would have benefited from, reducing everyone’s well-being.
(A good chunk of our political system’s dysfunctions are probably due to some social classes believing that other social classes despise them and are voting against their interests, and so counter-voting to screw over the first social class. I know at least one person who switched allegiance from Hillary to Trump almost entirely to stick it to liberals they think look down on them for classist reasons.)
Ultimately, though, social class is with us whether we like it or not. Even if a full generation of orphan children were raised with no knowledge of their origins and completely equal treatment by society at large, each would end up marrying/associating with people who have personalities similar to themselves (and remember that genetics plays a large role in personality.) Just as current social classes in America are ethnically different, (Southern whites are drawn from different European populations than Northern whites, for example,) so would the society resulting from our orphanage experiment differentiate into groups similar in genetics and personality.
Why do Americans generally proclaim their opposition to judging others based on background status, and then act classist, anyway? There are two main reasons.
As already discussed, different classes have real disagreements with each other. Even if I think I shouldn’t judge others, I can’t put aside my moral disgust at certain behaviors just because they happen to correlate with different classes.
It sounds good to say nice, magnanimous things that make you sound more socially sensitive and aware than others, like, “I wouldn’t hesitate to go out of my way to help someone in trouble.” So people like to say these things whether they really mean them or not.
In reality, people are far less magnanimous than they like to claim they are in front of their friends. People like to say that we should help the homeless and save the whales and feed all of the starving children in Africa, but few people actually go out of their way to do such things.
There is a reason Mother Teresa is considered a saint, not an archetype.
In real life, not only does magnanimity have a cost, (which the rich can better afford,) but if you don’t live up to your claims, people will notice. If you talk a good talk about loving others but actually mistreat them, people will decide that you’re a hypocrite. On the internet, you can post memes for free without having to back them up with real action, causing discussions to descend into competitive virtue signaling in which no one wants to be the first person to admit that they actually are occasionally self-interested. (Cory Doctorow has a relevant discussion about how “reputation economies”–especially internet-based ones–can go horribly wrong.)
Unfortunately, people often confuse background and earned status.
American society officially has no hereditary social classes–no nobility, no professions limited legally to certain ethnicities, no serfs, no Dalits, no castes, etc. Officially, if you can do the job, you are supposed to get it.
Most of us believe, at least abstractly, that you shouldn’t judge or discriminate against others for background status factors they have no control over, like where they were born, the accent they speak with, or their skin tone. If I have two resumes, one from someone named Lakeesha, and the other from someone named Ian William Esquire III, I am supposed to consider each on their merits, rather than the connotations their names invoke.
But because “status” is complicated, people often go beyond advocating against “background” status and also advocate that we shouldn’t accord social status for any reason at all. That is, full social equality.
This is not possible and would be deeply immoral in practice.
When you need heart surgery, you really hope that the guy cutting you open is a top-notch heart surgeon. When you’re flying in an airplane, you hope that both the pilot and the guys who built the plane are highly skilled. Chefs must be good at cooking and authors good at writing.
These are all forms of earned status, and they are good.
Smart people are valuable to society because they do nice things like save you from heart attacks or invent cell-phones. This is not “winning at capitalism;” this is benefiting everyone around them. In this context, I’m happy to let smart people have high status.
In a hunter-gatherer society, smart people are the ones who know the most about where animals live and how to track them, how to get water during a drought, and where to find the 1-inch stem they spotted last season that marks a tasty underground tuber. Among nomads, smart people are the ones with the biggest mental maps of the territory, the folks who know the safest and quickest routes from good summer pasture to good winter pasture, how to save an animal from dying, and how to heal a sick person. Among pre-literate people, smart people composed epic poems that entertained their neighbors for many winters’ nights, and among literate ones, the smart people became scribes and accountants. Even the communists valued smart people, when they weren’t chopping their heads off for being bourgeois scum.
So even if we say, abstractly, “I value all people, no matter how smart they are,” the smart people do more of the stuff that benefits society than the dumb people, which means they end up with higher social status.
So, yes, high IQ is a high social status marker, and low IQ is a low social status marker, and thus at least some people will be snobs about signaling their IQ and their disdain for dumb people.
BUT.
I am speaking here very abstractly. There are plenty of “high status” people who are not benefiting society at all. Plenty of people who use their status to destroy society while simultaneously enriching themselves. And yes, someone can come into a community, strip out all of its resources and leave behind pollution and unemployment, and happily call it “capitalism” and enjoy high status as a result.
I would be very happy if we could stop engaging in competitive holiness spirals and stop lionizing people who became wealthy by destroying communities. I don’t want capitalism at the expense of having a pleasant place to live in.
ὃν οἱ θεοὶ φιλοῦσιν, ἀποθνῄσκει νέος — he whom the gods love dies young. (Menander)
Harpending wasn’t particularly young, nor was his death unexpected, but I am still sad; I have enjoyed his work for years, and there will be no more. Steve Sailer has a nice eulogy.
In less tragic HBD-osphere news, it looks like Peter Frost has stopped writing his blog, Evo and Proud, due to Canadian laws prohibiting free speech. (There has been much discussion of this on Frost’s posts that were carried over on Unz; ultimately, the antisemitism of many Unz commentators made it too dangerous for Frost to continue blogging, even though his posts actually had nothing to do with Judaism.)
Back to our subject: This is an attempt to answer–coherently–a friend’s inquiry.
Why are people snobs about intelligence?
Is math ability better than verbal?
Do people only care about intelligence in the context of making money?
We’re going to tackle the easiest question first, #2. No, math ability is not actually better than verbal ability.
Imagine two people. Person A–we’ll call her Alice–has exceptional verbal ability. She probably has a job as a journalist, novelist, poet, or screenwriter. She understands other people’s emotions and excels at interacting verbally with others. But she sucks at math. Not just sucks–she struggles to count to ten.
Alice is going to have a rough time handling money. In fact, Alice will probably be completely dependent on the people around her to handle money for her. Otherwise, however, Alice will probably have a pretty pleasant life.
Of course, if Alice happened to live in a hunter-gatherer society where people don’t use numbers over 5, she would not stand out at all. Alice could be a highly respected oral poet or storyteller–perhaps her society’s version of an encyclopedia, considered wise and knowledgeable about a whole range of things.
Now consider Person B–we’ll call her Betty. Betty has exceptional math ability, but can only say a handful of words and cannot intuit other people’s emotions.
Betty is screwed.
Here’s the twist: #2 is a trick question.
Verbal and mathematical ability are strongly correlated in pretty much everyone who hasn’t had brain damage (so long as you are looking at people from the same society). Yes, people like to talk about “multiple intelligences,” like “kinesthetic” and “musical” intelligence. It turns out that most of these are correlated. (The one exception may be kinesthetic, about which I have heard conflicting reports. I swear I read a study somewhere which found that sports players are smarter than sports watchers, but all I’m finding now are reports that athletes are pretty dumb.)
Yes, many–perhaps most–people are better at one skill than another. This effect is generally small–we’re talking about people who get A+ in English and only B+s in math, not people who get A+ in English but Fs in math.
The effect may be more pronounced for people at the extremes of high-IQ–that is, someone who is three standard deviations above the norm in math may be only slightly above average in verbal, and vice versa–but professional authors are not generally innumerate, nor are mathematicians and scientists unable to read and write. (In fact, their professions require constantly writing papers for publication and reading the papers published by their colleagues.)
All forms of “intelligence” probably rely, at a basic level, on bodily well-being. Your brain is a physical object inside your body, and if you do not have the material bits necessary for well-being, your brain will suffer. When you haven’t slept in a long time, your ability to think goes down the tubes. If you haven’t eaten in several days (or perhaps just this morning), you will find it difficult to think. If you are sick or in pain, again, you will have trouble thinking.
Healthy people have an easier time thinking, and this applies across the board to all forms of thought–mathematical, verbal, emotional, kinesthetic, musical, etc.
“Health” here doesn’t just include things we normally associate with it, like eating enough vegetables and swearing to the dentist that this time, you’re really going to floss. It probably also includes minute genetic variations in how efficient your body is at building and repairing tissues; chemicals or viruses you were exposed to in-utero; epigenetics, etc.
So where does this notion that math and science are better than English and feelings come from, anyway?
A. Math (and science) are disciplines with (fairly) objective answers. If I ask you, “What’s 2+2?” we can determine pretty easily whether you got it correct. This makes mathematical ability difficult to fudge and easy to verify.
Verbal disciplines, by contrast, are notoriously fuzzy:
riverrun, past Eve and Adam’s, from swerve of shore to bend
I scowl with frustration at myself in the mirror. Damn my hair – it just won’t behave, and damn Katherine Kavanagh for being ill and subjecting me to this ordeal. I should be studying for my final exams, which are next week, yet here I am trying to brush my hair into submission. I must not sleep with it wet. I must not sleep with it wet. Reciting this mantra several times, I attempt, once more, to bring it under control with the brush. I roll my eyes in exasperation and gaze at the pale, brown-haired girl with blue eyes too big for her face staring back at me, and give up. My only option is to restrain my wayward hair in a ponytail and hope that I look semi presentable.
Best-seller, or Mary Sue dreck?
And what does this mean:
Within that conflictual economy of colonial discourse which Edward Said describes as the tension between the synchronic panoptical vision of domination – the demand for identity, stasis – and the counterpressure of the diachrony of history – change, difference – mimicry represents an ironic compromise. If I may adapt Samuel Weber’s formulation of the marginalizing vision of castration, then colonial mimicry is the desire for a reformed, recognizable Other, as a subject of a difference that is almost the same, but not quite. Which is to say, that the discourse of mimicry is constructed around an ambivalence; in order to be effective, mimicry must continually produce its slippage, its excess, its difference. (source)
If we’re going to argue about who’s smartest, it’s much easier if we can assign a number to everyone and declare that the person with the biggest number wins. The SAT makes a valiant effort at quantifying verbal knowledge like the number of words you can accurately use, but it is very hard to articulate what makes a text so great that Harvard University would hire the guy who wrote it.
B. The products of science have immediately obvious, useful applications, while the products of verbal abilities appear more superficial and superfluous.
Where would we be today without the polio vaccine, internal combustion engines, or the transistor? What language would we be writing in if no one had cracked the Enigma code, or if the Nazis had not made Albert Einstein a persona non grata? How many of us use computers, TVs, or microwaves? And let’s not forget all of the science that has gone into breeding and raising massively more caloric strains of wheat, corn, chicken, beef, etc., to assuage the world’s hunger.
We now live in a country where too much food is our greatest health problem!
If I had to pick between the polio vaccine and War and Peace, I’d pick the vaccine, even if every minute spent with Tolstoy is a minute of happiness. (Except when *spoilers spoilers* and then I cry.)
But literature is not the only product of verbal ability; we wouldn’t be able to tell other people about our scientific discoveries if it weren’t for language.
Highly verbal people are good at communication and so help keep the gears of modern society turning, which is probably why La Griffe du Lion found that national per capita GDP correlated more closely with verbal IQ scores than composite or mathematical scores.
Of course, as noted, these scores are highly correlated–so the whole business is really kind of moot.
So where does this notion come from?
In reality, high-verbal people tend to be more respected and better paid than high-math people. No, not novelists–novelists get paid crap. But average pay for lawyers–high verbal–is much better than average pay for mathematicians. Scientists are poorly paid compared to other folks with similar IQs and do badly on the dating market; normal people frequently bond over their lack of math ability.
“Math is hard. Let’s go shopping!” — Barbie
Even at the elementary level, math and science are given short shrift. How many schools have a “library” for math and science exploration in the same way they have a “library” for books? I have seen the lower elementary curriculum; kindergarteners are expected to read small books and write full sentences, but by the end of the year, they are only expected to count to 20 and add/subtract numbers up to 5. (eg, 1+4, 2+3, 3-2, etc.)
The claim that math/science abilities are more important than verbal abilities probably stems primarily from high-math/science people who recognize their fields’ contributions to so many important parts of modern life and are annoyed (or angry) about the lack of recognition they receive.
As we were discussing yesterday, I theorize that people have neural feedback loops that reward them for conforming/imitating others/obeying authorities and punish them for disobeying/not conforming.
This leads people to obey authorities or go along with groups even when they know, logically, that they shouldn’t.
There are certainly many situations in which we want people to conform even though they don’t want to, like when my kids have to go to bed or buckle their seatbelts–as I said yesterday, the feedback loop exists because it is useful.
But there are plenty of situations where we don’t want people to conform, like when trying to brainstorm new ideas.
Under what conditions will people disobey authority?
In person, people may disobey authorities when they have some other social system to fall back on. If disobeying an authority in Society A means I lose social status in Society A, I will be more likely to disobey if I am a member in good standing in Society B.
If I can use my disobedience against Authority A as social leverage to increase my standing in Society B, then I am all the more likely to disobey. A person who can effectively stand up to an authority figure without getting punished must be, our brains reason, a powerful person, an authority in their own right.
Teenagers do this all the time, using their defiance against adults, school, teachers, and society in general to curry higher social status among other teenagers, the people they actually care about impressing.
SJWs do this, too: consider the students who confronted the president of Princeton.
I normally consider the president of Princeton an authority figure, and even though I probably disagree with him on far more political matters than these students do, I’d be highly unlikely to be rude to him in real life–especially if I were a student he could get expelled from college.
But if I had an outside audience–Society B–clapping and cheering for me behind the scenes, the urge to obey would be weaker. And if yelling at the President of Princeton could guarantee me high social status, approval, job offers, etc., then there’s a good chance I’d do it.
But then I got to thinking: Are there any circumstances under which these students would have accepted the president’s authority?
Obviously if the man had a proven track record of competently performing a particular skill the students wished to learn, they might follow his example.
Or not.
If authority works via neural feedback loops, employing some form of “mirror neurons,” do these systems activate more strongly when the people we are perceiving look more like ourselves (or like our internalized notion of what people in our “tribe” look like, since mirrors are a recent invention)?
In other words, what would a cross-racial version of the Milgram experiment look like?
Unfortunately, it doesn’t look like anyone has tried it (and to do it properly, it’d need to be a big experiment, involving several “scientists” of different races [so that the study isn’t biased by one “scientist” just being bad at projecting authority] interacting with dozens of students of different races, which would be a rather large undertaking.) I’m also not finding any studies on cross-racial authority (I did find plenty of websites offering practical advice about different groups’ leadership styles,) though I’m sure someone has studied it.
However, I did find cross-racial experiments on empathy, which may involve the same brain systems, and so are suggestive:
Using transcranial magnetic stimulation, we explored sensorimotor empathic brain responses in black and white individuals who exhibited implicit but not explicit ingroup preference and race-specific autonomic reactivity. We found that observing the pain of ingroup models inhibited the onlookers’ corticospinal system as if they were feeling the pain. Both black and white individuals exhibited empathic reactivity also when viewing the pain of stranger, very unfamiliar, violet-hand models. By contrast, no vicarious mapping of the pain of individuals culturally marked as outgroup members on the basis of their skin color was found. Importantly, group-specific lack of empathic reactivity was higher in the onlookers who exhibited stronger implicit racial bias.
Using the event-related potential (ERP) approach, we tracked the time-course of white participants’ empathic reactions to white (own-race) and black (other-race) faces displayed in a painful condition (i.e. with a needle penetrating the skin) and in a nonpainful condition (i.e. with Q-tip touching the skin). In a 280–340 ms time-window, neural responses to the pain of own-race individuals under needle penetration conditions were amplified relative to neural responses to the pain of other-race individuals displayed under analogous conditions.
In this study, we used functional magnetic resonance imaging (fMRI) to investigate how people perceive the actions of in-group and out-group members, and how their biased view in favor of own team members manifests itself in the brain. We divided participants into two teams and had them judge the relative speeds of hand actions performed by an in-group and an out-group member in a competitive situation. Participants judged hand actions performed by in-group members as being faster than those of out-group members, even when the two actions were performed at physically identical speeds. In an additional fMRI experiment, we showed that, contrary to common belief, such skewed impressions arise from a subtle bias in perception and associated brain activity rather than decision-making processes, and that this bias develops rapidly and involuntarily as a consequence of group affiliation. Our findings suggest that the neural mechanisms that underlie human perception are shaped by social context.
None of these studies shows definitively whether or not in-group vs. out-group biases are an inherent feature of our neurological systems, but Avenanti’s finding that people were more empathetic toward a purple-skinned person than toward a member of a racial out-group suggests that some amount of learning is involved in the process–and that rather than comparing people against our in-group, we may be comparing them against our out-group.
At any rate, you may get similar outcomes either way.
In cases where you want to promote group cohesion and obedience, it may be beneficial to sort people by self-identity.
In cases where you want to guard against groupthink, obedience, or conformity, it may be beneficial to mix up the groups. Intellectual diversity is great, but even ethnic diversity may help people resist defaulting to obedience, especially when they know they shouldn’t.
Using data from two panel studies on U.S. firms and an online experiment, we examine investor reactions to increases in board diversity. Contrary to conventional wisdom, we find that appointing female directors has no impact on objective measures of performance, such as ROA, but does result in a systematic decrease in market value.
(Solal argues that investors may perceive the hiring of women–even competent ones–as a sign that the company is pursuing social justice goals instead of money-making goals and dump the stock.)
Additionally, diverse companies may find it difficult to work together toward a common goal–there is a good quantity of evidence that increasing diversity decreases trust and inhibits group cohesion. EG, from The downside of diversity:
IT HAS BECOME increasingly popular to speak of racial and ethnic diversity as a civic strength. From multicultural festivals to pronouncements from political leaders, the message is the same: our differences make us stronger.
But a massive new study, based on detailed interviews of nearly 30,000 people across America, has concluded just the opposite. Harvard political scientist Robert Putnam — famous for “Bowling Alone,” his 2000 book on declining civic engagement — has found that the greater the diversity in a community, the fewer people vote and the less they volunteer, the less they give to charity and work on community projects. In the most diverse communities, neighbors trust one another about half as much as they do in the most homogenous settings. The study, the largest ever on civic engagement in America, found that virtually all measures of civic health are lower in more diverse settings.
As usual, I suspect there is an optimum level of diversity–depending on a group’s purpose and its members’ preferences–that helps minimize groupthink while still preserving most of the benefits of cohesion.
So I was thinking the other day about the question of why people go along with others and do things even when they believe (or know) they shouldn’t. As Tolstoy asks, why did the French army go along with this mad idea to invade Russia in 1812? Why did Milgram’s subjects obey his orders to “electrocute” people? Why do I feel emotionally distressed when refusing to do something, even when I have very good reasons to refuse?
As I mentioned ages ago, I suspect that normal people have neural circuits that reward them for imitating others and punish them for failing to imitate. Mirror neurons probably play a critical role in this process, but probably aren’t the complete story.
These feedback loops are critical for learning–infants only a few months old begin the process of learning to talk by moving their mouths and making “ba ba” noises in imitation of their parents. (Hence why it is called “babbling.”) They do not consciously say to themselves, “let me try to communicate with the big people by making their noises;” they just automatically move their faces to match the faces you make at them. It’s an instinct.
You probably do this, too. Just watch what happens when one person in a room yawns and then everyone else feels compelled to do it, too. Or if you suddenly turn and look at something behind the group of people you’re with–others will likely turn and look, too.
Autistic infants have trouble with imitation, (and according to Wikipedia, several studies have found abnormalities in their mirror neuron systems, though I suspect the matter is far from settled–among other things, I am not convinced that everyone with an ASD diagnosis actually has the same thing going on.) Nevertheless, there is probably a direct link between autistic infants’ difficulties with imitation and their difficulties learning to talk.
For adults, imitation is less critical (you can, after all, consciously decide to learn a new language,) but still important for survival. If everyone in your village drinks out of one well and avoids the other well, even if no one can explain why, it’s probably a good idea to go along and only drink out of the “good” well. Something pretty bad probably happened to the last guy who drank out of the “bad” well, otherwise the entire village wouldn’t have stopped drinking out of it. If you’re out picking berries with your friends when suddenly one of them runs by yelling “Tiger!” you don’t want to stand there and yell, “Are you sure?” You want to imitate them, and fast.
Highly non-conformist people probably have “defective” or low-functioning feedback loops. They simply feel less compulsion to imitate others–it doesn’t even occur to them to imitate others! These folks might die in interesting ways, but in the meanwhile, they’re good sources for ideas other people just wouldn’t have thought of. I suspect they are concentrated in the arts, though clearly some of them are in programming.
Normal people’s feedback loops kick in when they are not imitating others around them, making them feel embarrassed, awkward, or guilty. When they imitate others, their brains reward them, making them feel happy. This leads people to enjoy a variety of group-based activities, from football games to prayer circles to line dancing to political rallies.
Normal people having fun by synchronizing their bodily movements.
At its extreme, these groups become “mobs,” committing violent acts that many of the folks involved wouldn’t under normal circumstances.
Highly conformist people’s feedback loops are probably over-active, making them feel awkward or uncomfortable while simply observing other people not imitating the group. This discomfort can only be relieved by getting those other people to conform. These folks tend to favor more restrictive social policies and can’t understand why other people would possibly want to do those horrible, non-conforming things.
To reiterate: this feedback system exists because it helped your ancestors survive. It is not people being “sheep;” it is a perfectly sensible approach to learning about the world and avoiding dangers. And different people have stronger or weaker feedback loops, resulting in more or less instinctual desire to go along with and imitate others.
However, there are times when you shouldn’t imitate others. Times when, in fact, everyone else is wrong.
The Milgram Experiment places the subject in a situation where their instinct to obey the experimenter (an “authority figure”) is in conflict with their rational desire not to harm others (and their instinctual empathizing with the person being “electrocuted.”)
In case you have forgotten the Milgram Experiment, it went like this: an unaware subject is brought into the lab, where he meets the “scientist” and a “student,” who are really in cahoots. The subject is told that he is going to assist with an experiment to see whether administering electric shocks to the “student” will make him learn faster. The “student” also tells the subject, in confidence, that he has a heart condition.
The real experiment is to see if the subject will shock the “student” to death at the “scientist’s” urging.
No actual shocks are administered, but the “student” is a good actor, making out that he is in terrible pain and then suddenly going silent, etc.
Before the experiment, Milgram polled various people, both students and “experts” in psychology, and pretty much everyone agreed that virtually no one would administer all of the shocks, even when pressured by the “scientist.”
In Milgram’s first set of experiments, 65 percent (26 of 40) of experiment participants administered the experiment’s final massive 450-volt shock,[1] though many were very uncomfortable doing so; at some point, every participant paused and questioned the experiment; some said they would refund the money they were paid for participating in the experiment. Throughout the experiment, subjects displayed varying degrees of tension and stress. Subjects were sweating, trembling, stuttering, biting their lips, groaning, digging their fingernails into their skin, and some were even having nervous laughing fits or seizures. (bold mine)
I’m skeptical about the seizures, but the rest sounds about right. Resisting one’s own instinctual desire to obey–or putting the desire to obey in conflict with one’s other desires–creates great emotional discomfort.
So much so that it feels really dickish to point out that dogs aren’t actually humans and we don’t actually treat them like full family members. Maybe this is just the American difficulty with shades of gray, where such an argument is seen as the moral equivalent of eating puppies for breakfast, or maybe extreme dog affection is an instinctual mental trait of healthy people, and so only abnormal weirdos think it sounds irrational.
As we discussed yesterday, pet ownership is normal (in that the majority of Americans own pets,) and pet owners themselves are disproportionately married suburbanites with children. However, pet ownership is also somewhat exceptional, in that Americans–particularly American whites–appear globally unique in their high degree of affection for pets.
Incidentally, 76% of dog owners have bought Christmas presents for their dogs. (I’ve even done this.)
Why do people love dogs (and other pets) so much?
The Wikipedia cites a couple of theories, eg:
Wilson’s (1984) biophilia hypothesis is based on the premise that our attachment to and interest in animals stems from the strong possibility that human survival was partly dependent on signals from animals in the environment indicating safety or threat. The biophilia hypothesis suggests that now, if we see animals at rest or in a peaceful state, this may signal to us safety, security and feelings of well-being which in turn may trigger a state where personal change and healing are possible.
Since I tend to feel overwhelmingly happy and joyful while walking in the woods, I understand where this theory comes from, but it doesn’t explain why suburban white parents like pets more than, say, single Chinese men, or why hunter-gatherers (or recently settled hunter-gatherers) aren’t the most avid pet-owners (you would think hunter-gatherers would be particularly in tune with the states of the animals around them!)
So I propose a different theory:
Pets are (mostly) toy versions of domestic animals.
Europeans–and Americans–have traditionally been engaged in small-scale farming and animal husbandry, raising chickens, pigs, cattle, horses, sheep, and occasionally goats, geese, turkeys, and ducks.
Dogs and cats held a special place on the farm. Dogs were an indispensable part of their operations, both to protect the animals and help round them up, and worked closely with the humans in farm management. Much has been written on the relationship between the shepherd and his sheep, but let us not overlook the relationship between the shepherd and his dog.
Cats also did their part, by eliminating the vermin that were attracted to the farmer’s grain.
These dogs and cats were still “working” animals rather than “pets” kept solely for their company, but they clearly enjoyed a special status in the farmer’s world, as helpers rather than food.
For children, raising “pets” teaches valuable skills necessary for caring for larger animals–better to make your learning mistakes when the only one dependent on you is a hamster than when it’s a whole flock of sheep and your family’s entire livelihood.
Raising pets provides an additional benefit in creating the bond between a child and dog that will eventually transform into the working relationship between farmer and farm-dog.
Empathy has probably played an important role in animal domestication–the ability to understand the animal’s point of view and care about its well being probably helps a lot when trying to raise it from infancy to adulthood. People with higher levels of empathy may have been better at domesticating animals in the first place, and living in an economy dependent on animal husbandry may have also selected for people with high levels of empathy.
In other words, people who treated their dogs well have probably been more evolutionarily successful than people who didn’t, pushing us toward instinctually treating dogs like one of the family. (Though I still think that people who sell cancer treatments for cats and dogs are taking advantage of gullible pet owners and that actually treating an animal just like a human is a bad idea. I also find it distasteful to speak of adopted dogs finding their “forever homes,” a phrase lifted from human adoption.)
However, if you’ve ever interacted with humans, you’ve probably noticed by now that some would give their dog their right kidney, and some would set a dog on fire without blinking.
(I am reminded here of the passage in Philippe Bourgois’s In Search of Respect in which the anthropologist is shocked to discover that violent Nuyorican crack dealers think torturing animals is funny.)
I have been looking for a map showing the historical distribution of domesticated animals in different parts of the globe, but have so far failed. I’d be most grateful if anyone can find one. To speak very generally, Australia historically had no domesticated animals, South America had llamas, North America had dogs, African hunter-gatherers didn’t have any, African horticulturalists had a chicken-like animal, and then Europe/Asia/The Middle East/India/other Africans had a large variety of animals, like camels and yaks and horses and goats.
This brings us to the ADRA2b gene:
…a deletion variant of the ADRA2b gene. Carriers remember emotionally arousing images more vividly and for a longer time, and they also show more activation of the amygdala when viewing such images (Todd and Anderson, 2009; Todd et al., 2015). … Among the Shors, a Turkic people of Siberia, the incidence was 73%. Curiously, the incidence was higher in men (79%) than in women (69%). It may be that male non-carriers had a higher death rate, since the incidence increased with age (Mulerova et al., 2015). … The picture is still incomplete but the incidence of the ADRA2b deletion variant seems to range from a low of 10% in some sub-Saharan African groups to a high of 50-65% in some European groups and 55-75% in some East Asian groups. Given the high values for East Asians, I suspect this variant is not a marker for affective empathy per se but rather for empathy in general (cognitive and affective). [source]
The Shors are a small, formerly semi-nomadic group from Siberia. I haven’t found out much about them, but I bet they had dogs, like other Siberian groups.
Frost hypothesizes that extensive empathy developed as part of the suite of mental traits that made life possible in large communities of Mesolithic hunter-gatherers along the Baltic:
This weak kinship zone may have arisen in prehistory along the coasts of the North Sea and the Baltic, which were once home to a unique Mesolithic culture (Price, 1991). An abundance of marine resources enabled hunter-fisher-gatherers to achieve high population densities by congregating each year in large coastal agglomerations for fishing, sealing, and shellfish collecting. Population densities were comparable in fact to those of farming societies, but unlike the latter there was much “churning” because these agglomerations formed and reformed on a yearly basis. Kinship obligations would have been insufficient to resolve disputes peaceably, to manage shared resources, and to ensure respect for social rules. Initially, peer pressure was probably used to get people to see things from the other person’s perspective. Over time, however, the pressure of natural selection would have favored individuals who more readily felt this equivalence of perspectives, the result being a progressive hardwiring of compassion and shame and their gradual transformation into empathy and guilt (Frost, 2013a; Frost, 2013b).
Empathy and guilt are brutally effective ways to enforce social rules. If one disobeys these internal overseers, the result is self-punishment that passes through three stages: anguish, depression and, ultimately, suicidal ideation. [source]
Someone has been reading a lot of Dostoyevsky. But I’m wondering if the first ingredient is actually farming/animal husbandry.
To sum:
1. People with high levels of empathy may have had an easier time domesticating animals/raising domesticated animals, creating a feedback loop of increasing empathy in farming populations.
2. This empathetic connection was strongest with dogs and cats, who aren’t meat to be slaughtered but human partners.
3. Children assigned the task of raising dogs and cats bonded with their charges.
4. Modern “pets” are (living) toy versions of the working dogs and cats who once helped manage the farms.
Poll time!
1. Do you have a pet?
2. Do you think pets should be treated like family members/humans?
3. Would you shoot your pet for a million dollars?
A. Never!
B. Yes, but I would use the money to raise 100 abandoned animals out of suffering.
C. Yes.
D. That’s a terrible question! What kind of sick fuck makes up a question like that?
Don’t get me wrong. I like animals; I just don’t like them in my house. Every time I petsit for friends with cats, I am reminded of why I don’t own cats: scooping feces is repulsive (and don’t get me started on Toxoplasma gondii!) Dogs are marginally better, in that the homes of dog owners don’t always smell of feces, but unfortunately they often smell of dog.
For this post, I am defining “pet” as animals that people keep solely for companionship. Animals kept because they do useful things or materially benefit their owners, like seeing eye dogs, egg-laying chickens, mouse-hunting cats, race horses, or dancing bears are not “pets.” Medical “therapy animals” are basically pets. It makes plenty of sense for people to keep around work animals, but pets seem to be kept around simply for the enjoyment of their company.
According to Wikipedia, Americans own approximately 94 million cats, 78 million dogs, 172 million fish, and 45 million small mammals, reptiles, etc. (Though of course some of these are “useful” animals that I wouldn’t count.) This comes out to about 4x as many pets as children, concentrated in 60% of the households (most pet owners have more than one.)
Pets cost quite a bit of money–the average small dog costs about $7,000 to $13,000 over its 14-year lifespan; the average large dog costs $6,000 to $8,000 over its much shorter 8-year lifespan. [source] (Note that it is cheaper per year to own a small dog; the large dog’s lower lifetime cost is due entirely to its shorter lifespan.) Cats cost about the same as dogs–people don’t spend much on “outdoor” cats, but “indoor” cats cost about $9,000 to $11,000 over their lifetimes.
Just making some rough estimates, I’d say it looks like people spend about $700 per year per dog or cat, which comes out to about 120 billion dollars per year for all of America’s dogs and cats. That’s a lot of money! (And this doesn’t count the expenses incurred by shelters and animal control agencies to take care of the excess pets people don’t want.)
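For anyone who wants to check my arithmetic, here’s a quick back-of-the-envelope sketch (the inputs are just the rough figures quoted above, not precise data):

```python
# Back-of-the-envelope estimate of annual US spending on pet dogs and cats.
# All inputs are the rough figures quoted above, not precise data.

lifetime_costs = {
    "small dog":  (7_000, 13_000, 14),  # (low $, high $, lifespan in years)
    "large dog":  (6_000,  8_000,  8),
    "indoor cat": (9_000, 11_000, 14),  # assuming a roughly dog-like lifespan
}

for animal, (low, high, years) in lifetime_costs.items():
    print(f"{animal}: ${low / years:,.0f}-${high / years:,.0f} per year")

dogs = 78_000_000
cats = 94_000_000
per_pet_per_year = 700  # rough midpoint of the per-year figures above

total = (dogs + cats) * per_pet_per_year
print(f"Total: about ${total / 1e9:.0f} billion per year")  # ~$120 billion
```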
Americans are probably exceptional in the number of pets they have. According to Wikipedia, 46% of the world’s pet dog population lives in the US. (By contrast, only 4.4% of the world’s human population lives in the US.) The ratio gets even more skewed if we break it down by race–63% of America’s whites own pets, versus only 49% of non-whites. [source]
However, other countries similar to the US don’t seem as keen on pets: the %pets/%people ratio for the US is 10.5, for Canada 7.5, and for Britain, 5.8. This might have to do with factors like Britain being a more crowded country where people have less space for pets, or with the Wikipedia data being inaccurate. Either way, I think it’s safe to say that pets are very characteristically American, and especially a white American thing.
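Here’s how that ratio works out for the US, using the Wikipedia figures above (I haven’t reproduced the underlying Canadian and British numbers):

```python
# %pets / %people ratio for the US, using the Wikipedia figures quoted above.
us_share_of_world_pet_dogs = 46.0  # percent of the world's pet dogs living in the US
us_share_of_world_people = 4.4     # percent of the world's humans living in the US

ratio = us_share_of_world_pet_dogs / us_share_of_world_people
print(f"US %pets/%people ratio: {ratio:.1f}")  # ~10.5
```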
One theory about why people own so many pets is that they’re substitute children/companions/friends for lonely people who don’t have kids/spouses/friends, perhaps as a side effect of our highly atomized culture. I came into this post expecting to confirm this, but it looks like Crazy Cat Ladies are actually a relatively small percent of the overall pet-owning population.
According to Gallup, 50% of married people own a dog, and 33% own a cat (some people own both.) By contrast, only 37% of unmarried people own dogs and only 25% own cats. People with children under 18 are more likely to own pets than people without. And people from the “East” are less likely to own pets than people from the “West.” (Interestingly, “westerners” are disproportionately more likely to own cats.)
So it looks to me like most pet ownership is actually motivated by the idea that kids should have pets, with pets more common in suburban or rural areas where they have more room to run around. This is probably particularly so for cats, who are probably more likely to be “outdoor” pets or mouse-catching farm cats in rural areas (ie, the “West.”)
There is an extensive belief–perhaps folk belief–that pet ownership is good for people. Gallup found that 60% of people believe that pet owners lead more satisfying lives than non-pet owners; numerous studies claim that pet ownership–or even just occasional interaction with animals–makes people healthier. There even exists an “animal therapy” industry. Unfortunately, the studies on the subject look rather unreliable–the ones about pet ownership are confounded by healthier people being more likely to have pets in the first place, for example.
And yet, there’s something about the notion that I find appealing; something about playing with happy puppies or petting a bunny that I find downright pleasant. Maybe it’s something as simple as animals being nice and therefore making people happy.
It’s getting late, so I’ll continue this tomorrow.
One of the subjects people care most about in ev psych and related disciplines is intelligence. Teachers would love it if all of their students suddenly began scoring in the 90th percentile; so would parents, of course. Tons of psychological studies have been done on subjects like “Do people score better on tests after thinking about famous scientists?” (without finding much useful,) not to mention millions of dollars spent on education reform without, as far as I can tell, much real change in school performance.
Since “IQ”–our best attempt at measuring and quantifying intelligence–appears to be at least 50% genetic, genes are a good spot to look when attempting to unravel the mystery of genius.
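As an aside, one standard (and much-simplified) way heritability gets estimated from twin data is Falconer’s formula, h² = 2(r_MZ – r_DZ): twice the gap between the identical-twin correlation and the fraternal-twin correlation. The correlations in this little sketch are illustrative ballpark values, not results from any particular study:

```python
# Falconer's formula: a simple (and much-simplified) way to estimate heritability
# from twin-study correlations: h^2 = 2 * (r_MZ - r_DZ).
# The correlations below are illustrative ballpark values, not results from any
# particular study.

r_mz = 0.85  # IQ correlation between identical (monozygotic) twins
r_dz = 0.60  # IQ correlation between fraternal (dizygotic) twins

h_squared = 2 * (r_mz - r_dz)
print(f"Estimated heritability: {h_squared:.2f}")  # 0.50 -> "about 50% genetic"
```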
One of my theories on the subject is that if there are two kinds of dumb, perhaps there are two kinds of smart. Obviously dropping someone on their head is not going to result in genius, but perhaps some people are smart because they have the good luck to carry a variety of genes that generally code for things leading to high IQ, while others are smart because they have a few particular genes or mutations. The folks with the generally IQ-boosting, all-around genes come from parents and extended families with IQs similar to their own, while the folks with rare, particular, or novel mutations/genes would likely stand out even from their families. Such genes might have deleterious side effects, or might confer genius in only one or two particular arenas, resulting in, say, the stereotypical absent-minded professor or idiot savant.
If genius is fragile–my definition of fragile, not necessarily anyone else’s–then it is easily damaged; the difference between high-IQ and low-IQ in a particular population will be related to the possession of deleterious mutations that damage IQ. If IQ is not fragile–that is, if it is robust–then we would find rare, beneficial genes that boost IQ.
Environmentally, it is already obvious that genius is fragile–that is, it is much easier to drop someone on their head and subtract 40 IQ points than to find any intervention that will reliably add 40 points–but this does not necessarily preclude a variety of interesting genetic findings.
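To make the distinction concrete, here’s a toy simulation of the two architectures I have in mind. Every number in it is invented purely for illustration, so don’t read anything into the specific values:

```python
import random

# Toy illustration of the "fragile" vs. "robust" pictures of genius sketched above.
# Every number here is invented purely for illustration.

random.seed(0)
N = 50_000  # simulated people per model

def fragile_iq():
    # "Fragile" model: everyone starts near the same ceiling, and rare deleterious
    # mutations knock points off. The smartest people are the least-damaged ones.
    iq = 115.0
    for _ in range(200):                # 200 sites where a rare bad mutation may sit
        if random.random() < 0.05:      # 5% chance of carrying each one
            iq -= random.uniform(0, 3)  # each knocks off up to 3 points
    return iq

def robust_iq():
    # "Robust" model: IQ is built up from many small boosting alleles. The smartest
    # people are the ones who happened to stack up the most of them.
    iq = 85.0
    for _ in range(200):                # 200 common small-effect boosting alleles
        if random.random() < 0.5:       # each present in about half the population
            iq += random.uniform(0, 0.3)
    return iq

for name, model in [("fragile", fragile_iq), ("robust", robust_iq)]:
    scores = sorted(model() for _ in range(N))
    mean = sum(scores) / N
    print(f"{name}: mean ~{mean:.1f}, lowest ~{scores[0]:.1f}, highest ~{scores[-1]:.1f}")
```

The point of the toy isn’t the numbers; it’s that under the “fragile” model the highest scorers are simply the people carrying the fewest deleterious mutations, while under the “robust” model they’re the people who happened to stack up the most boosting alleles–which is exactly the kind of difference a gene-hunting study could, in principle, pick up.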
Perhaps I am thinking about this all wrong, but that’s the structure I’ve got worked out so far.
Anyway, so people have been searching for genes linked to IQ. Will they find specific IQ-boosting genes that highly intelligent people have, but dumb people don’t? Or will they find specific IQ-damaging genes that dumb people have but intelligent people don’t? (Or maybe a combination of both?)
So, Neuroscience News recently covered a study published in Molecular Psychiatry that looked at genetic differences between highly intelligent people and the general population.
Now, I’m going to have to stop and point out a potential design flaw, at least according to the article:
“Published today in Molecular Psychiatry, the King’s College London study selected 1,400 high-intelligence individuals from the Duke University Talent Identification Program. Representing the top 0.03 per cent of the ‘intelligence distribution’, these individuals have an IQ of 170 or more – substantially higher than that of Nobel Prize winners, who have an average IQ of around 145.”
Duke TIP is aimed at middle schoolers, based largely on their elementary school test scores. Anything that starts out by comparing the IQs of elementary school kids to people who’ve already won Nobel Prizes may not be saying much.
Second, I’d just like to note that while the article is unclear, they are probably not claiming that all Duke TIP participants have IQs over 170, since they don’t–Duke TIP’s own website states that they only require IQ scores over 125. Rather, I suspect they used the test scores submitted to the TIP program to select students with IQs over 170. If some confusion has occurred and they actually used people with 125s, well, results may not be as claimed.
Quick rough calculations indicate that 1,400 people in the top 0.03% is not an unreasonable number: it would only require a base population of about 4.667 million people, and there are about 4 million kids per grade level in the US; TIP takes from multiple grades, and they could have used multiple years’ worth of participants. But I don’t know how many kids TIP takes each year.
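Spelled out, using the rough figures above:

```python
# How big a base population do you need to find 1,400 people in the top 0.03%?
high_iq_sample = 1_400
top_fraction = 0.0003  # top 0.03% of the distribution

base_population = high_iq_sample / top_fraction
print(f"Required base population: {base_population:,.0f}")  # ~4,666,667

kids_per_us_grade = 4_000_000  # rough figure used above
print(f"US grade levels needed: {base_population / kids_per_us_grade:.1f}")  # ~1.2
```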
Anyway, results:
“The study focused, for the first time, on rare, functional SNPs – rare because previous research had only considered common SNPs and functional because these are SNPs that are likely to cause differences in the creation of proteins.
“The researchers did not find any individual protein-altering SNPs that met strict criteria for differences between the high-intelligence group and the control group. However, for SNPs that showed some difference between the groups, the rare allele was less frequently observed in the high intelligence group. This observation is consistent with research indicating that rare functional alleles are more often detrimental than beneficial to intelligence. …
‘Rare functional alleles do not account for much on their own but in combination, their impact is significant.
‘Our research shows that there are not genes for genius. However, to have super-high intelligence you need to have many of the positive alleles and importantly few of the negative rare effects, such as the rare functional alleles identified in our study.’
Or as the abstract puts it:
We did not observe any individual protein-altering variants that are reproducibly associated with extremely high intelligence and within the entire distribution of intelligence.* Moreover, no significant associations were found for multiple rare alleles within individual genes. However, analyses using genome-wide similarity between unrelated individuals (genome-wide complex trait analysis) indicate that the genotyped functional protein-altering variation yields a heritability estimate of 17.4% (s.e. 1.7%) based on a liability model. In addition, investigation of nominally significant associations revealed fewer rare alleles associated with extremely high intelligence than would be expected under the null hypothesis. This observation is consistent with the hypothesis that rare functional alleles are more frequently detrimental than beneficial to intelligence.
*What does “and within the entire distribution of intelligence” mean in this sentence?
To be honest, I’m not sure about the interpretation that the only genetic difference between high-IQ and low-IQ people is that the low-IQ have more deleterious mutations and the high-IQ don’t. For starters, we observe ethnic variation in IQ scores, and I find it difficult to believe that vast swathes of the planet, some of which have very different marriage patterns, have abnormally high levels of deleterious, fitness-reducing mutations that other swathes of the planet don’t.
I certainly can believe, though, that there are deleterious mutations that reduce IQ.