One of my kids enjoys watching YouTube cooking videos, and they’re nearly 100% women making cakes.
Women’s magazines focus exclusively on 4 topics: men, fashion, diets, and cupcakes. You might think that diets and cupcakes are incompatible, but women’s magazines believe otherwise:
Just in case it’s not clear, that is not a watermelon. It is cake, cleverly disguised as a watermelon.
(YouTube has videos that show you how to make much better cake watermelons–for starters, you want red velvet cake for the middle, not just frosting…)
Magazines specifically aimed at “people who want to make cakes” are also overwhelmingly feminine. Whether we’re talking wedding cakes or chocolate cravings, apple pastries or donuts, sweets and women just seem to go together.
If men’s magazines ever feature food, I bet they’re steak and BBQ. (*Image searches*)
The meat-related articles do appear to be a little more gender-neutral than the cupcake-related articles–probably because men don’t tend to decorate their steaks with tiny baseball bats cut out of steak the way women like to decorate their cakes with tiny flowers made out of frosting.
It’s almost as if women have some kind of overwhelming craving for fats and sugars that men don’t really share.
I was talking with a friend recently about their workplace, where, “All of the women are on diets, but none of them can stay on their diets because they are all constantly eating at their workstations.” Further inquiries revealed that yes, they are eating sweets and pastries, not cashews and carrots, and that there is some kind of “office culture” of all of the women eating pastries together.
The irony here is pretty obvious.
Even many (most?) specialty “diet” foods are designed to still taste sweet. “Fat-free” yogurt is marketed as a health food even though it has as much sugar in it as a bowl of ice cream. Women are so attracted to the taste of sweet sodas, they drink disgusting Diet Coke. Dieting websites advise us that cake topped with fruit is “healthy.”
When men diet, they think “eat nothing but protein until ketosis kicks in” sounds like a great idea. When women diet, they want fat-free ice cream.
I don’t think it is just “women lack willpower.” (Or at least, not willpower in the sense of something people have much control over.) Rather, I think that men and women actually have substantially different food cravings.
So do children, for that matter.
Throughout most of human history, from hunter-gatherers to agriculturalists, the vast majority of women have specialized in obtaining (gathering, tending, harvesting,) plants. (The only exceptions are societies where people don’t eat plants, like the Inuit and the Masai, and our modern society, where most of us aren’t involved in food production.) By contrast, men have specialized in hunting, raising, and butchering animals–not because they were trying to hog the protein or had some sexist ideas about food production, but because animals tend to be bigger and heavier than women can easily lift. Dragging home and butchering large game requires significant strength.
I am inventing a “Just So” story, of course. But it seems sensible enough that each gender evolved a tendency to crave the particular kinds of foods it was most adept at obtaining.
Exercise wears down muscles; protein is necessary to build them back up. Protein fuels active lifestyles, and active lifestyles, in turn, require protein. Our male ancestors’ most important activities were most likely heavy labor (eg, building huts, hauling firewood, butchering game,) and defending the tribe. Our female ancestors’ most important activities were giving birth and nursing children (we would not exist had they not, after all.) For these activities, women want to be fat. It’s not good enough to put on weight after you get pregnant, when the growing fetus is already dependent on its mother for nutrients. Far better for a woman to be plump before she gets pregnant (and to stay that way long after.)
Of course, this is “fat” by historical standards, not modern American standards.
I suspect, therefore, that women are naturally inclined to eat as much as possible of sweet foods in order to put on weight in preparation for pregnancy and lactation–only today, the average woman has 2 pregnancies instead of 12, and so instead of turning that extra weight into children and milk, it just builds up.
Obviously we are talking about a relatively small effect on food preferences, both because our ancestors could not afford to be too picky about what they ate, and because the genetic difference between men and women is slight–not like the difference between humans and lizards, say.
Interestingly, gender expression in humans appears to basically be female by default. If, by random chance, you are born with only one X chromosome, (instead of the normal XX or XY,) you can still survive. Sure, you’ll be short, you probably won’t menstruate, and you’ll likely have a variety of other issues, but you’ll be alive. By contrast, if you received only a Y chromosome from your parents and no accompanying X, you wouldn’t be here reading this post. You can’t survive with just a Y. Too many necessary proteins are encoded on the X.
Gender differences show up even in fetuses, but don’t become a huge deal until puberty, when the production of androgens and estrogens really cranks up.
Take muscle development, which relies on the production of androgens (eg, testosterone.) Grownups produce more androgens than small children, and men produce more than women. Children can exercise, and certainly children who do daily farm chores are stronger than children who sit on their butts watching TV all day, but children can’t do intense strength-training because they just don’t produce enough androgens to build big muscles. Women likewise produce fewer androgens than men, and so cannot build muscle at the same rate, though obviously they are stronger than children.
At puberty, boys begin producing the androgens that allow them to build muscles and become significantly stronger than girls.
Sans androgens, even XY people develop as female. (See Androgen Insensitivity Syndrome, in which people with XY chromosomes cannot absorb the androgens their bodies create, and so develop as female.) Children produce some androgens (obviously,) but not nearly as many as adults. Pre-pubescent boys, therefore, are more “feminine,” biologically, than post-pubescent men; puberty induces maleness.
All children seem pretty much obsessed with sweets, far more than adults. If allowed, they will happily eat cake until they vomit.
Even though food seems like a realm where evolution would heavily influence our tastes, it’s pretty obvious that culture has a huge effect. I doubt Jews have a natural aversion to pork or Hindus to beef. Whether you think chicken hearts are tasty or vomitous is almost entirely dependent on whether or not they are a common food in your culture.
But small children are blissfully less attuned to culture than grownups. Like little id machines, they spit out strained peas and throw them on the floor. They do not care about our notion that “vegetables are good for you.” This from someone who’ll eat bird poop if you let them.
The child’s affection for sweets, therefore, I suspect is completely natural and instinctual. Before the invention of refined sugars and modern food distribution systems, it probably kept them alive and healthy. Remember that the whole reason grownups try to eat more vegetables is that vegetables are low in calories. Grownups have larger stomachs and so can eat more than children, allowing them to extract adequate calories from low-calorie foods, but small children do not and cannot. In developing countries, children can still struggle to get enough calories even where food is abundant, if that food consists of low-calorie plants they just cannot physically eat enough of. Children, therefore, are obsessed with high-calorie foods.
At puberty, this instinct changes for boys–orienting them more toward protein sources, which they are going to have to expend a lot of energy trying to haul back to their families for the rest of their lives, but stays basically unchanged in females.
ETA: I have found two more sources/items of relevance:
When it comes to what we eat, men and women behave differently: Men consume more beef, eggs, and poultry; while women eat more fruits and vegetables and consume less fat than do men. … The gender differences in preferences for healthier foods begin in childhood. Previous literature has found that girls choose healthier food and are fonder of fruits and vegetables than are boys. Boys rated beef, processed meat, and eggs as more desirable than did girls. …
Sensory (taste) differences between the genders are the second most widely ventured explanation for the differences in food choices, although it is not clear that such genetic differences actually exist. While the popular media argue that females prefer sweetness and dislike bitterness, while males may enjoy bitterness, academic literature on this matter is less conclusive. The bitter taste receptor, gene TAS2R38, has been associated with the ability to taste PROP (6-n-propylthiouracil),
one source of genetic variation in PROP and PTC taste. Individuals who experience bitterness strongly are assumed to also experience sweetness strongly relative to those who experience PROP as only slightly bitter. While previous studies found that inherited taste-blindness to bitter compounds such as PROP may be a risk factor for obesity, this literature has been hotly disputed.
The distribution of perceived bitterness of PROP differs among women and men, as does the correlation between genetic taste measures and acceptance of sweetness. A higher percentage of women are PROP and PTC tasters, sensing bitterness above threshold. It has been suggested that women are more likely to be supertasters, or those who taste with far greater intensity than average.
(I have removed the in-line citations for ease of reading; please refer to the original if you want them.)
Well, I don’t remember where this graph came from, but it looks like my intuitions were pretty good: males and females both have very low levels of testosterone during childhood, and during puberty their levels become radically different.
Today’s selection, Homicide, is ev psych with a side of anthropology; I am excerpting the chapter on people-who-murder-children. (You are officially forewarned.)
Way back in middle school, I happened across (I forget how) my first university-level textbook, on historical European families and family law. I got through the chapter on infanticide before giving up, horrified that enough Germans were smushing their infants under mattresses or tossing them into the family hearth that the Holy Roman Empire needed laws specifically on the subject.
It was a disillusioning moment.
Daly and Wilson’s Homicide, 1988, contributes some (slightly) more recent data to the subject (though of course it would be nice to have even more recent data).
(I think some of the oddities in # of incidents per year may be due to ages being estimated when the child’s true age isn’t known, eg, “headless torso of a boy about 6 years old found floating in the Thames.”)
We begin with a conversation on the subject of which child parents would favor in an emergency:
If parental motives are such as to promote the parent’s own fitness, then we should expect that parents will often be inclined to act so that neither sibling’s interests prevail completely. Typically, parental imposition of equity will involve supporting the younger, weaker competitor, even when the parent would favor the older if forced to choose between the two. It is this latter sort of situation–“Which do you save when one must be sacrificed?”–in which parents’ differential valuation of their children really comes to the fore. Recall that there were 11 societies in the ethnographic review of Chapter 3 for which it was reported that a newborn might be killed if the birth interval were too short or the brood too numerous. It should come as no surprise that there were no societies in which the prescribed solution to such a dilemma was said to be the death of an older child. … this reaction merely illustrates that one takes for granted the phenomenon under discussion, namely the gradual deepening of parental commitment and love.
*Thinks about question for a while* *flails* “BUT MY CHILDREN ARE ALL WONDERFUL HOW COULD I CHOOSE?” *flails some more*
That said, I think there’s an alternative possibility besides just affection growing over time: the eldest child has already proven their ability to survive; an infant has not. The harsher the conditions of life (and thus, the greater the likelihood of actually facing a real situation in which you genuinely don’t have enough food for all of your children,) the higher the infant mortality rate. The eldest children have already run the infant mortality gauntlet and so are reasonably likely to make it to adulthood; the infants still stand a high chance of dying. Sacrificing the child you know is healthy and strong for the one with a high chance of dying is just stupid.
Whereas infant mortality is not one of my personal concerns.
Figure 4.4 shows that the risk of parental homicide is indeed a declining function of the child’s age. As we would anticipate, the most dramatic decrease occurs between infants and 1-year-old children. One reason for expecting this is that the lion’s share of the prepubertal increase in reproductive value in natural environments occurs within the first year.
(I think “prepubertal increase in reproductive value” means “decreased likelihood of dying.”)
Moreover, if parental disinclination reflects any sort of assessment of the child’s quality or the mother’s situation, then an evolved assessment mechanism should be such as to terminate any hopeless reproductive episode as early as possible, rather than to squander parental effort in an enterprise that will eventually be abandoned. … Mothers killed 61 in the first 6 months compared to just 27 in the second 6 months. For fathers, the corresponding numbers are 24 vs. 14. [See figure 4.4] … This pattern of victimization contrasts dramatically with the risk of homicide at the hands of nonrelatives (Figure 4.5)…
I would like to propose an alternative possibility: just as a child who attempts to drive a car is much more likely to crash immediately than to successfully navigate onto the highway and then crash, so a murderous person who gets their hands onto a child is more likely to kill it immediately than to wait a few years.
A similar mechanism may be at play in the apparent increase and then decrease in homicides of children by nonrelatives during toddlerhood. Without knowing anything about these cases, I can only speculate, but 1-4 are the ages when children are most commonly put into daycares or left with sitters while their moms return to work. The homicidally-minded among these caretakers, then, are likely to kill their charges sooner rather than later. (School-aged children, by contrast, are both better at running away from attackers and highly unlikely to be killed by their teachers.)
Teenagers are highly conflictual creatures, and the rate at which nonrelatives kill them explodes after puberty. When we consider the conspicuous, tempestuous conflicts that occur between teenagers and their parents–conflicts that apparently dwarf those of the preadolescent period–it is all the more remarkable that the risk of parental homicide continues its relentless decline to near zero.
… When mothers killed infants, the victims had been born to them at a mean age of 22.7 years, whereas older victims had been born at a mean maternal age of 24.5. This is a significant difference, but both means are significantly below the 25.8 years that was the average age of all new Canadian mothers during the same period, according to Canadian Vital Statistics.
In other words, impulsive fuckups who get accidentally pregnant are likely to be violent impulsive fuckups.
We find a similar result with respect to marital status: Mothers who killed older children are again intermediate between infanticidal women and the population-at-large. Whereas 51% of mothers committing infanticide were unmarried, the same was true of just 34% of those killing older children. This is still substantially above the 12% of Canadian births in which the new mother was unmarried …
Killing of an older child is often associated with maternal depression. Of the 95 mothers who killed a child beyond its infancy, 15.8% also committed suicide. … By contrast, only 2 of 88 infanticidal mothers committed suicide (and even this meager 2.3% probably overestimates the association of infanticide with suicide, since infanticides are the only category of homicides in which a significant incidence of undetected cases is likely.) … one of these 2 killed three older children as well.
In the Canadian data, it is also noteworthy that 35% of maternal infanticides were attributed by the investigating police force … [as] “mentally ill or mentally retarded (insane),” versus 58% of maternal homicides of older children. Here and elsewhere, it seems that the sorts of cases that are simultaneously rare and seemingly contrary to the actor’s interests–in both the Darwinian and the commonsense meaning of interest–also happen to be the sorts of cases most likely to be attributed to some sort of mental incompetence. … We identify as mad those people who lack a species-typical nepotistic perception of their interests or who no longer care to pursue them. …
Violent people go ahead and kill their kids; people who go crazy later kill theirs later.
We do at least know the ages of the 38 men who killed their infant children: the mean was 26.3 years. Moreover, we know that fathers averaged 4 years older than mothers for that substantial majority of Canadian births that occurred within marriages… . Since the mean age for all new Canadian mothers during the relevant period… was 25.8, it seems clear that infanticidal fathers are indeed relatively young. And as was the case with mothers, infanticidal fathers were significantly younger than those fathers who killed older offspring. (mean age at the victim’s birth = 29.2 years). …
As with mothers, fathers who killed older children killed themselves as well significantly more often (43.6% of 101) than did those who killed their infant children (10.5% of 38). Also like mothers is the fact that those infanticidal fathers who did commit suicide were significantly older (mean age = 30.5 years) than those who did not (mean = 25.8). Likewise, the paternal age at which older victims had been born was also significantly greater for suicidal (mean = 31.1 years; N = 71) than for nonsuicidal (mean =27.5; N = 67) homicidal fathers. And men who killed their older children were a little more likely to be deemed mentally incompetent (20.8%) than those who killed their infants (15.8%). …
Fathers, however, were significantly less likely to commit suicide after killing an adult offspring (19% of 21 men) than a child (50% of 80 men.) … 20 of the 22 adult victims of their father were sons… three of the four adult victims of mothers were daughters. … There is no hint of such a same-sex bias in the killings of either infants… or older children. …
An infrequent but regular variety of homicide is that in which a man destroys his wife and children. A corresponding act of familicide by the wife is almost unheard of. …
No big surprises in this section.
Perhaps the most obvious prediction from a Darwinian view of parental motives is this: Substitute parents will generally tend to care less profoundly for their children than natural parents, with the result that children reared by people other than their natural parents will be more often exploited and otherwise at risk. Parental investment is a precious resource, and selection must favor those parental psyches that do not squander it on nonrelatives.
Disclaimer: obviously there are good stepparents who care deeply for their stepchildren. I’ve known quite a few. But I’ve also met some horrible stepparents. Given the inherent vulnerability of children, I find distasteful our society’s pushing of stepparenting as normal without cautions against its dangers. In most cases, remarriage seems to be undertaken to satisfy the parent, not the child.
In an interview study of stepparents in Cleveland, Ohio, for example–a study of a predominantly middle-class group suffering no particular distress or dysfunction–Louise Duberman (1975) found that only 53% of stepfathers and 25% of stepmothers could claim to have “parental feeling” toward their stepchildren, and still fewer to “love” them.
Some of this may be influenced by the kinds of people who are likely to become stepparents–people with strong family instincts probably have better luck getting married to people like themselves and staying that way than people who are bad at relationships.
In an observational study of Trinidadian villagers, Mark Flinn (1988) found that stepfathers interacted less with “their” children than did natural fathers; that interactions were more likely to be aggressive within steprelationships than within the corresponding natural relationships; and that stepchildren left home at an earlier age.
Pop psychology and how-to manuals for stepfamilies have become a growth industry. Serious study of “reconstituted” families is also burgeoning. Virtually all of this literature is dominated by a single theme: coping with the antagonisms…
Here the authors stop to differentiate between stepparenting and adoption, which they suspect is more functional because adoptive parents actually want to be parents in the first place. However,
such children have sometimes been found to suffer when natural children are subsequently born to the adopting couple, a result that has led some professionals to counsel against adoption by childless couples until infertility is definitely established. …
Continuing on with stepparents:
The negative characterization of stepparents is by no means peculiar to our culture. … From Eskimos to Indonesians, through dozens of tales, the stepparent is the villain of every piece. … We have already encountered the Tikopia or Yanomamo husband who demands the death of his new wife’s prior children. Other solutions have included leaving the children with postmenopausal matrilineal relatives, and the levirate, a wide-spread custom by which a widow and her children are inherited by the dead man’s brother or other near relative. …
Social scientists have turned this scenario on its head. The difficulties attending steprelationships–insofar as they are acknowledged at all–are presumed to be caused by the “myth of the cruel stepparent” and the child’s fears.
Why this bizarre counterintuitive view is the conventional wisdom would be a topic for a longer book than this; suffice to say that the answer surely has more to do with ideology than with evidence. In any event, social scientists have staunchly ignored the question of the factual basis for the negative “stereotyping” of stepparents.
Under Freud’s logic, all sorts of people who’d been genuinely hurt by others were summarily dismissed, told that they were the ones who actually harbored ill-will against others and were just “projecting” their emotions onto their desired victims.
Freudianism is a crock of shit, but in this case, it helped social “reformers” (who of course don’t believe in silly ideas like evolution) discredit people’s perfectly reasonable fears in order to push the notion that “family” doesn’t need to follow traditional (ie, biological) forms, but can be reinvented in all sorts of novel ways.
So are children at risk in stepparent homes in contemporary North America? [see Figures 4.7 and 4.8.] … There is … no appreciable statistical confounding between steprelationships and poverty in North America. … Stepparenthood per se remains the single most powerful risk factor for child abuse that has yet been identified. (Here and throughout this discussion, “stepparents” include both legal and common-law spouses of the natural parent.) …
Speaking of Figures 4.7 and 4.8, I must say that the kinds of people who get divorced (or were never married) and remarried within a year of their kid’s birth are likely to be unstable people who tend to pick particularly bad partners, and the kinds of people willing to enter into a relationship with someone who has a newborn is also likely to be, well, unusual. Apparently homicidal.
By contrast, the people who are willing to marry someone who already has, say, a ten year old, may be relatively normal folks.
Just how great an elevation of risk are we talking about? Our efforts to answer that question have been bedeviled by a lack of good information on the living arrangements of children in the general population. … there are no official statistics [as of when this was written] on the numbers of children of each age who live in each household type. There is no question that the 43% of murdered American child abuse victims who dwelt with substitute parents is far more than would be expected by chance, but estimates of that expected percentage can only be derived from surveys that were designed to answer other questions. For a random sample of American children in 1976, … the best available national survey… indicates that only about 1% or fewer would be expected to have dwelt with a substitute parent. An American child living with one or more substitute parents in 1976 was therefore approximately 100 times as likely to be fatally abused as a child living with natural parents only…
Results for Canada are similar. In Hamilton, Ontario in 1983, for example, 16% of child abuse victims under 5 years of age lived with a natural parent and a stepparent… Since small children very rarely have stepparents–less than 1% of preschoolers in Hamilton in 1983, for example–that 16% represents forty times the abuse rate for children of the same age living with natural parents. … 147 Canadian children between the ages of 1 and 4 were killed by someone in loco parentis between 1974 and 1983; 37 of those children (25.2%) were the victims of their stepparents, and another 5 (3.4%) were killed by unrelated foster parents.
…The survey shows, for example, that 0.4% of 2,852 Canadian children, aged 1-4 in 1984, lived with a stepparent. … For the youngest age group in Figure 4.9, those 2 years of age and younger, the risk from a stepparent is approximately 70 times that from a natural parent (even though the latter category includes all infanticides by natural mothers.)
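The risk multiples quoted above fall out of simple division. Here is a back-of-the-envelope sketch; the function name and the exact population shares are my own illustrations drawn from the quoted figures, not Daly and Wilson’s actual method:

```python
def naive_risk_ratio(victim_share, pop_share):
    """Rough over-representation factor: the fraction of victims in a given
    living arrangement divided by that arrangement's share of the general
    child population. Ignores confounders and age structure; illustration only."""
    return victim_share / pop_share

# Hamilton, Ontario, 1983: 16% of abuse victims under age 5 lived with a
# stepparent, vs. roughly 0.4% of same-age children overall.
print(naive_risk_ratio(0.16, 0.004))   # ~40, the quoted fortyfold elevation

# American data: 43% of fatally abused children lived with substitute parents,
# vs. "about 1% or fewer" expected; the quoted ~100x follows if the true
# population share is nearer 0.4% than the 1% upper bound.
print(naive_risk_ratio(0.43, 0.01))    # ~43 under the 1% assumption
```

The caveat in the docstring matters: this ratio only says stepchildren are heavily over-represented among victims; adjusting for confounders like poverty is a separate step, which the authors address in the passage above.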
Now we need updated data. I wonder if abortion has had any effect on the rates of infanticide and if increased public acceptance of stepfamilies has led to more abused children or higher quality people being willing to become stepparents.
Honestly, left to my own devices, I wouldn’t own a TV. (With Mythbusters canceled, what’s the point anymore?)
Don’t get me wrong. I have watched (and even enjoyed) the occasional sitcom. I’ve even tried watching football. I like comedies. They’re funny. But after they end, I get that creeping feeling of emptiness inside, like when you’ve eaten a bowl of leftover Halloween candy instead of lunch. There is no “meat” to these programs–or vegan-friendly vegetable protein, if you prefer.
I do enjoy documentaries, though I often end up fast-forwarding through large chunks of them because they are full of filler shots of rotating galaxies or astronomers parking their telescopes or people… taalkiiing… sooo… sloooowwwwlllly… And sadly, if you’ve seen one documentary about ancient Egypt, you’ve seen them all.
Ultimately, time is a big factor: I am always running short. Once I’m done with the non-negotiables (like “take care of the kids” and “pay the bills,”) there’s only so much time left, and time spent watching TV is time not spent writing. Since becoming a competent writer is one of my personal goals, TV gets punted to the bottom of the list, slightly below doing the dishes.
Obviously not everyone writes, but I have a dozen other backup projects for when I’m not writing, everything from “read more books” to “volunteer” to “exercise.”
I think it is a common fallacy to default to assuming that other people are like oneself. I default to assuming that other people are time-crunched, running on 8 shots of espresso and trying to cram in a little time to read Tolstoy and get the tomatoes planted before they fall asleep. (And I’m not even one of those Type-A people.)
Obviously everyone isn’t like me. They come home from work, take care of their kids, make dinner, and flip on the TV.
An acquaintance recently made a sad but illuminating comment regarding their favorite TV shows: “I know they’re not real, but it feels like they are. It’s like they’re my friends.”
I think the simple answer is that we process the pictures on the TV as though they were real. TV people look like people and sound like people, so who cares if they don’t smell like people? Under normal (pre-TV) circumstances, if you hung out with some friendly, laughing people every day in your living room, they were your family. You liked them, they liked you, and you were happy together.
Today, in our atomized world of single parents, only children, spinsters and eternal bachelors, what families do we have? Sure, we see endless quantities of people on our way to work, but we barely speak, nod, or glance at each other, encapsulated within our own cars or occupied with checking Facebook on our cellphones while the train rumbles on.
As our connections to other people have withered away, we’ve replaced them with fake ones.
OZZIE & HARRIET: The Adventures of America’s Favorite Family
The Adventures of Ozzie and Harriet was the first and longest-running family situational comedy in television history. The Nelsons came to represent the idealized American family of the 1950s – where mom was a content homemaker, dad’s biggest decision was whether to give his sons the keys to the car, and the boys’ biggest problem was getting a date to the high school prom. …When it premiered, Ozzie & Harriet: The Adventures of America’s Favorite Family was the highest-rated documentary in A&E’s history.
(According to Wikipedia, Ozzie and Harriet started on the radio back in the 30s, got a comedy show (still on radio) in 1944, and were on TV from 1952-1966.) It was, to some extent, about a real family–the actors in the show were an actual husband and wife + their kids, but the show itself was fictionalized.
It even makes sense to people to ask them, “Who is your favorite TV personality?“–to which the most common answer isn’t Adam Savage or Jamie Hyneman, but Mark Harmon, who plays some made-up guy named Leroy Jethro Gibbs.
The rise of “reality TV” only makes the “people want to think of the TV people as real people they’re actually hanging out with” phenomenon all the more palpable–and then there’s the incessant newsstand harping on celebrity gossip. The only thing I want out of a movie star (besides talent) is that I not recognize them; it appears that the only thing everyone else wants is that they do recognize them.
This is from Blockbusters: Hit-Making, Risk-Taking, and the Big Business of Entertainment, the new book by Anita Elberse, Filene professor of business administration. Elberse (el-BER-see) spent 10 years interviewing and observing film, television, publishing, and sports executives to distill the most profitable strategy for these high-profile, unpredictable marketplaces. … The most profitable business strategy, she says, is not the “long tail,” but its converse: blockbusters like Star Wars, Avatar, Friends, the Harry Potter series, and sports superstars like Tom Brady.
Strategically, the blockbuster approach involves “making disproportionately big investments in a few products designed to appeal to mass audiences,” … “Production value” means star actors and special effects. … a studio can afford only a few “event movies” per year. But Horn’s big bets for Warner Brothers—the Harry Potter series, The Dark Knight, The Hangover and its sequel, Ocean’s Eleven and its two sequels, Sherlock Holmes—drew huge audiences. By 2011, Warner became the first movie studio to surpass $1 billion in domestic box-office receipts for 11 consecutive years. …
Jeff Zucker ’86 put a contrasting plan into place as CEO at NBC Universal. In 2007 he led a push to cut the television network’s programming costs: … Silverman began cutting back on expensive dramatic content, instead acquiring rights to more reasonably priced properties; eschewing star actors and prominent TV producers, who commanded hefty fees; and authorizing fewer costly pilots for new series. The result was that by 2010, NBC was no longer the top-rated TV network, but had fallen to fourth place behind ABC, CBS, and Fox, and “was farther behind on all the metrics that mattered,” writes Elberse, “including, by all accounts, the profit margins Zucker and Silverman had sought most.” Zucker was asked to leave his job in 2010. …
From a business perspective, “bankable” movie stars like Julia Roberts, Johnny Depp, or George Clooney function in much the way Harry Potter and Superman do: providing a known, well-liked persona.
So people like seeing familiar faces in their movies (except Oprah Winfrey, who is apparently not a draw:
the 1998 film Beloved, starring Oprah Winfrey, based on Nobel Prize-winner Toni Morrison’s eponymous 1987 novel and directed by Oscar-winner Jonathan Demme … flopped resoundingly: produced for $80 million, it sold only $23 million in tickets.
Or maybe Beloved isn’t just the kind of feel-good action flick that drives movie audiences the way Batman is.)
But what about sports?
Here I am on even shakier ground than sitcoms. I can understand playing sports–they’re live action versions of video games, after all. You get to move around, exercise, have fun with your friends, and triumphantly beat them at something. (Or if you’re me, lose.) I can understand cheering for your kids and being proud of them as they get better and better at some athletic skill (or at least try hard at it.)
I don’t understand caring about strangers playing a game.
I have no friends on the Yankees or the Mets, the Phillies or the Marlins. I’ve never met a member of the Alabama Crimson Tide or the Clemson Tigers, and I harbor no illusions that my children will ever play on such teams. I feel no loyalty to the athletes-drawn-from-all-over-the-country who play on my “hometown” team, and I consider athlete salaries vaguely obscene.
I find televised sports about as interesting as watching someone do math. If the point of the game is to win, then why not just watch a 5-minute summary at the end of the day of all the teams’ wins and losses?
But according to The Way of the Blockbuster:
Perhaps no entertainment realm takes greater care in building a brand name than professional sports: fan loyalty reliably builds repeat business. “The NFL is blockbuster content,” Elberse says. “It’s the most sought-after content we have in this country. Four of the five highest-rated television shows [in the United States] ever are Super Bowls. NFL fans spend an average of 9.5 hours per week on games and related content. That gives the league enormous power when it comes to negotiating contracts with television networks.”
Elberse has studied American football and basketball and European soccer, and found that selling pro sports has much in common with selling movies, TV shows, or books. Look at the Real Madrid soccer club—the world’s richest, with annual revenues of $693 million and a valuation of $3.3 billion. Like Hollywood studios, Real Madrid attracts fan interest by engaging superstars—such as Cristiano Ronaldo, the Portuguese forward the club acquired from Manchester United for a record $131.6 million in 2009. “We think of ourselves as content producers,” a Real Madrid executive told Elberse, “and we think of our product—the match—as a movie.” As she puts it: “It might not have Tom Cruise in it, but they do have Cristiano Ronaldo starring.”
In America, sports stars are famous enough that even I know some of their names, like Peyton Manning, Serena Williams, and Michael Jordan.
I think the basic drive behind people’s love of TV sports is the same as their love of sitcoms (and dramas): they process it as real. And not just real, but as people they know: their family, their tribe. Those are their boys out there, battling for glory and victory against that other tribe’s boys. It’s vicarious warfare with pseudo-armies, a domesticated expression of the tribal urge to slaughter your enemies, drive off their cattle, and abduct their women. So what if the army isn’t “real,” if the heroes aren’t your brother or cousin but paid gladiators shipped in from thousands of miles away to perform for the masses? Your brain still interprets it as though it were; you still enjoy it.
Continuing with yesterday’s discussion (in response to a reader’s question):
Why are people snobs about intelligence?
Is math ability better than verbal?
Do people only care about intelligence in the context of making money?
1. People are snobs. Not all of them, obviously–just a lot of them.
So we’re going to have to back this up a step and ask why are people snobs, period.
Paying attention to social status–both one’s own and others’–is probably instinctual. We process social status in our prefrontal cortexes–the part of our brain generally involved in complex thought, imagination, long-term planning, personality, not being a psychopath, etc. Our brains respond positively to images of high-status items–activating reward-feedback loops that make us feel good–and negatively to images of low-status items–activating feedback loops that make us feel bad.
…researchers asked a person if the following statement was an accurate description of themselves: “I wouldn’t hesitate to go out of my way to help someone in trouble.” Some of the participants answered the question without anyone else seeing their response. Others knowingly revealed their answer to two strangers who were watching in a room next to them via video feed. The result? When the test subjects revealed an affirmative answer to an audience, their [medial prefrontal cortexes] lit up more strongly than when they kept their answers to themselves. Furthermore, when the participants revealed their positive answers not to strangers, but to those they personally held in high regard, their MPFCs and reward striatums activated even more strongly. This confirms something you’ve assuredly noticed in your own life: while we generally care about the opinions of others, we particularly care about the opinions of people who really matter to us.
(Note what constitutes a high-status activity.)
But this alone does not prove that paying attention to social status is instinctual. After all, I can also point to the part of your brain that processes written words (the Visual Word Form Area), and yet I don’t assert that literacy is an instinct. For that matter, anything we think about has to be processed in our brains somewhere, whether instinct or not.
Better evidence comes from anthropology and zoology. According to Wikipedia, “All societies have a form of social status,” even hunter-gatherers. If something shows up in every single human society, that’s a pretty good sign that it is probably instinctual–and if it isn’t, it is so useful a thing that no society exists without it.
Among animals, social status is generally determined by a combination of physical dominance, age, relationship, and intelligence. Killer whale pods, for example, are led by the eldest female in the family; leadership in elephant herds is passed down from a deceased matriarch to her eldest daughter, even if the matriarch has surviving sisters. Male lions assert dominance by being larger and stronger than other lions.
In all of these cases, the social structure exists because it benefits the group, even if it harms some of the individuals in it. If having no social structure were beneficial for wolves, then wolf packs without alpha wolves would out-compete packs with alphas. This is the essence of natural selection.
Among humans, social status comes in two main forms, which I will call “earned” and “background.”
“Earned” social status stems from things you do, like rescuing people from burning buildings, inventing quantum physics, or stealing wallets. High status activities are generally things that benefit others, and low-status activities are generally those that harm others. This is why teachers are praised and thieves are put in prison.
Earned social status is a good thing, because it rewards people for being helpful.
“Background” social status is basically stuff you were born into or have no control over, like your race, gender, the part of the country you grew up in, your accent, name, family reputation, health/disability, etc.
Americans generally believe that you should not judge people based on background social status, but they do it, anyway.
Interestingly, high-status people are not generally violent. (Just compare crime rates by neighborhood SES.) Outside of military conquest, violence is the domain of the low-class and those afraid they are slipping in social class, not the high class. Compare Angela Merkel to the average German far-right protester. Obviously the protester would win in a fist-fight, but Merkel is still in charge. High-class people go out of their way to donate to charity, do volunteer work, and talk about how much they love refugees. In the traditional societies of the Pacific Northwest, they held potlatches at which they distributed accumulated wealth to their neighbors; in our society, the wealthy donate millions to education. Ideally, in a well-functioning system, status is the thanks rich people get for doing things that benefit the community instead of spending their billions on gold-plated toilets.
The Arabian babbler … spends most of its life in small groups of three to 20 members. These groups lay their eggs in a communal nest and defend a small territory of trees and shrubs that provide much-needed safety from predators.
When it’s living as part of a group, a babbler does fairly well for itself. But babblers who get kicked out of a group have much bleaker prospects. These “non-territorials” are typically badgered away from other territories and forced out into the open, where they often fall prey to hawks, falcons, and other raptors. So it really pays to be part of a group. … Within a group, babblers assort themselves into a linear and fairly rigid dominance hierarchy, i.e., a pecking order. When push comes to shove, adult males always dominate adult females — but mostly males compete with males and females with females. Very occasionally, an intense “all-out” fight will erupt between two babblers of adjacent rank, typically the two highest-ranked males or the two highest-ranked females. …
Most of the time, however, babblers get along pretty well with each other. In fact, they spend a lot of effort actively helping one another and taking risks for the benefit of the group. They’ll often donate food to other group members, for example, or to the communal nestlings. They’ll also attack foreign babblers and predators who have intruded on the group’s territory, assuming personal risk in an effort to keep others safe. One particularly helpful activity is “guard duty,” in which one babbler stands sentinel at the top of a tree, watching for predators while the rest of the group scrounges for food. The babbler on guard duty not only foregoes food, but also assumes a greater risk of being preyed upon, e.g., by a hawk or falcon. …
Unlike chickens, who compete to secure more food and better roosting sites for themselves, babblers compete to give food away and to take the worst roosting sites. Each tries to be more helpful than the next. And because it’s a competition, higher-ranked (more dominant) babblers typically win, i.e., by using their dominance to interfere with the helpful activities of lower-ranked babblers. This competition is fiercest between babblers of adjacent rank. So the alpha male, for example, is especially eager to be more helpful than the beta male, but doesn’t compete nearly as much with the gamma male. Similar dynamics occur within the female ranks.
In the eighteenth and early nineteenth centuries, wealthy private individuals substantially supported the military, with particular wealthy men buying stuff for a particular regiment or particular fort.
Noblemen paid high prices for military commands, and these posts were no sinecure. You got the obligation to substantially supply the logistics for your men, the duty to obey stupid orders that would very likely lead to your death, the duty to lead your men from in front while wearing a costume designed to make you particularly conspicuous, and the duty to engage in honorable personal combat, man to man, with your opposite number who was also leading his troops from in front.
A vestige of this tradition remains in that every English prince has been sent to war and has placed himself very much in harm’s way.
It seems obvious to me that a soldier being led by a member of the ruling class who is soaking up the bullets from in front is a lot more likely to be loyal and brave than a soldier sent into battle by distant rulers safely in Washington who despise him as a sexist homophobic racist murderer, that a soldier who sees his commander, a member of the ruling classes, fighting right in front of him, is reflexively likely to fight.
(Note, however, that magnanimity is not the same as niceness. The only people who are nice to everyone are store clerks and waitresses, and they’re only nice because they have to be or they’ll get fired.)
Most people are generally aware of each other’s social status, using contextual clues like clothing and accents to make quick, rough estimates. These contextual clues are generally completely neutral–they just happen to correlate with other behaviors.
For example, there is nothing objectively good or bad for society about wearing your pants belted beneath your buttocks, aside from it being an awkward way to wear your pants. But the style correlates with other behaviors, like crime, drug use, and aggression, low paternal investment, and unemployment, all of which are detrimental to society, and so the mere sight of underwear spilling out of a man’s pants automatically assigns him low status. There is nothing causal in this relationship–being a criminal does not make you bad at buckling your pants, nor does wearing your pants around your knees somehow inspire you to do drugs. But these things correlate, and humans are very good at learning patterns.
Likewise, there is nothing objectively better about operas than Disney movies, no real difference between a cup of coffee brewed in the microwave and one from Starbucks; a Harley Davidson and a Vespa are both motorcycles; and you can carry stuff around in just about any bag or backpack, but only the hoity-toity can afford something as objectively hideous as a $26,000 Louis Vuitton backpack.
All of these things are fairly arbitrary and culturally dependent–the way you belt your pants can’t convey social status in a society where people don’t wear pants; your taste in movies couldn’t matter before movies were invented. Among hunter-gatherers, social status is based on things like one’s skill at hunting, and if I showed up to the next PTA meeting wearing a top hat and monocle, I wouldn’t get any status points at all.
We tend to aggregate the different social status markers into three broad classes (lower, middle, and upper). As Scott Alexander says in his post about Siderea’s essay on class in America, which divides the US into 10% Underclass, 65% Working Class, 23.5% Gentry Class, and 1.5% Elite:
Siderea notes that Church’s analysis independently reached about the same conclusion as Paul Fussell’s famous guide. I’m not entirely sure how you’d judge this (everybody’s going to include lower, middle, and upper classes), but eyeballing Fussell it does look a lot like Church, so let’s grant this.
It also doesn’t sound too different from Marx. Elites sound like capitalists, Gentry like bourgeoisie, Labor like the proletariat, and the Underclass like the lumpenproletariat. Or maybe I’m making up patterns where they don’t exist; why should the class system of 21st century America be the same as that of 19th century industrial Europe?
There’s one more discussion of class I remember being influenced by, and that’s Unqualified Reservations’ Castes of the United States. Another one that you should read but that I’ll summarize in case you don’t:
1. Dalits are the underclass, … 2. Vaisyas are standard middle-class people … 3. Brahmins are very educated people … 4. Optimates are very rich WASPs … now they’re either extinct or endangered, having been pretty much absorbed into the Brahmins. …
Michael Church’s system (henceforth MC) and the Unqualified Reservation system (henceforth UR) are similar in some ways. MC’s Underclass matches Dalits, MC’s Labor matches Vaisyas, MC’s Gentry matches Brahmins, and MC’s Elite matches Optimates. This is a promising start. It’s a fourth independent pair of eyes that’s found the same thing as all the others. (commenters bring up Joel Kotkin and Archdruid Report as similar convergent perspectives).
I suspect the tendency to try to describe society as consisting of three broad classes (with the admission that other, perhaps tiny classes that don’t exactly fit into the others might exist) is actually just an artifact of being a three-biased society that likes to group things in threes (the Trinity, three-beat joke structure, three bears, Three Musketeers, three notes in a chord, etc.) This three-bias isn’t a human universal (or so I have read) but has probably been handed down to us from the Indo-Europeans (“Many Indo-European societies know a threefold division of priests, a warrior class, and a class of peasants or husbandmen. Georges Dumézil has suggested such a division for Proto-Indo-European society,”), so we’re so used to it that we don’t even notice ourselves doing it.
(For more information on our culture’s three-bias and different number biases in other cultures, see Alan Dundes’s Interpreting Folklore, though I should note that I read it back in high school and so my memory of it is fuzzy.)
(Also, everyone is probably at least subconsciously cribbing Marx, who was probably cribbing from some earlier guy who cribbed from another earlier guy, who set out with the intention of demonstrating that society–divided into nobles, serfs, and villagers–reflected the Trinity, just like those Medieval maps that show the world divided into three parts or the conception of Heaven, Hell, and Purgatory.)
At any rate, I am skeptical of any system that lumps 65% of people into one social class and 0.5% of people into a different social class as being potentially too-finely grained at one end of the scale and not enough at the other. Determining the exact number of social classes in American society may ultimately be futile–perhaps there really are three (or four) highly distinct groups, or perhaps social classes transition smoothly from one to the next with no sharp divisions.
I lean toward the latter theory, with broad social classes as merely a convenient shorthand for extremely broad generalizations about society. If you look any closer, you tend to find that people do draw finer-grained distinctions between themselves and others than “65% Working Class” would imply. For example, a friend who works in agriculture in Greater Appalachia once referred dismissively to other people they had to deal with as “red necks.” I might not be able to tell what differentiates them, but clearly my friend could. Similarly, I am informed that there are different sorts of homelessness, from true street living to surviving in shelters, and that lifetime homeless people are a different breed altogether. I might call them all “homeless,” but to the homeless, these distinctions are important.
Is social class evil?
This question was suggested by a different friend.
I suspect that social class is basically, for the most part, neutral-to-useful. I base this on the fact that most people do not work very hard to erase markers of class distinction, but instead actively embrace particular class markers. (Besides, you can’t get rid of it, anyway.)
It is not all that hard to learn the norms and values of a different social class and strategically employ them. Black people frequently switch between speaking African American Vernacular English at home and standard English at work; I can discuss religion with Christian conservatives and malevolent AI risk with nerds; you can purchase a Harley Davidson t-shirt as easily as a French beret and scarf.
(I am reminded here of an experiment in which researchers were looking to document cab drivers refusing to pick up black passengers; they found that when the black passengers were dressed nicely, drivers would pick them up, but when they wore “ghetto” clothes, the cabs wouldn’t. Cabbies: responding more to perceived class than race.)
And yet, people don’t–for the most part–mass-adopt the social markers of the upper class just to fool them. They love their motorcycle t-shirts, their pumpkin lattes, even their regional accents. Class markers are an important part of people’s cultural / tribal identities.
But what about class conflicts?
Because every class has its own norms and values, every class is, to some degree, disagreeing with the other classes. People for whom frugality and thrift are virtues will naturally think that people who drink overpriced coffee are lacking in moral character. People for whom anti-racism is the highest virtue will naturally think that Trump voters are despicable racists. A Southern Baptist sees atheists as morally depraved fetus murderers; nerds and jocks are famously opposed to each other; and people who believe that you should graduate from college, become established in your career, get married, and then have 0-1.5 children disapprove of people who drop out of high school, have a bunch of children with a bunch of different people, and go on welfare.
A moderate sense of pride in one’s own culture is probably good and healthy, but spending too much energy hating other groups is probably negative–you may end up needlessly hurting people whose cooperation you would have benefited from, reducing everyone’s well-being.
(A good chunk of our political system’s dysfunctions are probably due to some social classes believing that other social classes despise them and are voting against their interests, and so counter-voting to screw over the first social class. I know at least one person who switched allegiance from Hillary to Trump almost entirely to stick it to liberals they think look down on them for classist reasons.)
Ultimately, though, social class is with us whether we like it or not. Even if a full generation of orphan children were raised with no knowledge of their origins and completely equal treatment by society at large, each would end up marrying/associating with people who have personalities similar to their own (and remember that genetics plays a large role in personality.) Just as current social classes in America are ethnically different (Southern whites are drawn from different European populations than Northern whites, for example), so would the society resulting from our orphanage experiment differentiate into genetically and personality-similar groups.
Why do Americans generally proclaim their opposition to judging others based on background status, and then act classist, anyway? There are two main reasons.
As already discussed, different classes have real disagreements with each other. Even if I think I shouldn’t judge others, I can’t put aside my moral disgust at certain behaviors just because they happen to correlate with different classes.
It sounds good to say nice, magnanimous things that make you sound more socially sensitive and aware than others, like, “I wouldn’t hesitate to go out of my way to help someone in trouble.” So people like to say these things whether they really mean them or not.
In reality, people are far less magnanimous than they like to claim they are in front of their friends. People like to say that we should help the homeless and save the whales and feed all of the starving children in Africa, but few people actually go out of their way to do such things.
There is a reason Mother Teresa is considered a saint, not an archetype.
In real life, not only does magnanimity have a cost (which the rich can better afford), but if you don’t live up to your claims, people will notice. If you talk a good talk about loving others but actually mistreat them, people will decide that you’re a hypocrite. On the internet, you can post memes for free without having to back them up with real action, causing discussions to descend into competitive virtue-signalling in which no one wants to be the first person to admit that they actually are occasionally self-interested. (Cory Doctorow has a relevant discussion about how “reputation economies”–especially internet-based ones–can go horribly wrong.)
Unfortunately, people often confuse background and achieved status.
American society officially has no hereditary social classes–no nobility, no professions limited legally to certain ethnicities, no serfs, no Dalits, no castes, etc. Officially, if you can do the job, you are supposed to get it.
Most of us believe, at least abstractly, that you shouldn’t judge or discriminate against others for background status factors they have no control over, like where they were born, the accent they speak with, or their skin tone. If I have two resumes, one from someone named Lakeesha, and the other from someone named Ian William Esquire III, I am supposed to consider each on their merits, rather than the connotations their names invoke.
But because “status” is complicated, people often go beyond advocating against “background” status and also advocate that we shouldn’t accord social status for any reason. That is, full social equality.
This is not possible and would be deeply immoral in practice.
When you need heart surgery, you really hope that the guy cutting you open is a top-notch heart surgeon. When you’re flying in an airplane, you hope that both the pilot and the guys who built the plane are highly skilled. Chefs must be good at cooking and authors good at writing.
These are all forms of earned status, and they are good.
Smart people are valuable to society because they do nice things like save you from heart attacks or invent cell-phones. This is not “winning at capitalism;” this is benefiting everyone around them. In this context, I’m happy to let smart people have high status.
In a hunter-gatherer society, smart people are the ones who know the most about where animals live and how to track them, how to get water during a drought, and where the 1-inch stem they spotted last season–the marker of a tasty underground tuber–is located. Among nomads, smart people are the ones with the biggest mental maps of the territory, the folks who know the safest and quickest routes from good summer pasture to good winter pasture, how to save an animal from dying and how to heal a sick person. Among pre-literate people, smart people composed epic poems that entertained their neighbors for many winters’ nights, and among literate ones, the smart people became scribes and accountants. Even the communists valued smart people, when they weren’t chopping their heads off for being bourgeois scum.
So even if we say, abstractly, “I value all people, no matter how smart they are,” the smart people do more of the stuff that benefits society than the dumb people, which means they end up with higher social status.
So, yes, high IQ is a high social status marker, and low IQ is a low social status marker, and thus at least some people will be snobs about signaling their IQ and their disdain for dumb people.
I am speaking here very abstractly. There are plenty of “high status” people who are not benefiting society at all. Plenty of people who use their status to destroy society while simultaneously enriching themselves. And yes, someone can come into a community, strip out all of its resources and leave behind pollution and unemployment, and happily call it “capitalism” and enjoy high status as a result.
I would be very happy if we could stop engaging in competitive holiness spirals and stop lionizing people who became wealthy by destroying communities. I don’t want capitalism at the expense of having a pleasant place to live in.
As we were discussing yesterday, I theorize that people have neural feedback loops that reward them for conforming/imitating others/obeying authorities and punish them for disobeying/not conforming.
This leads people to obey authorities or go along with groups even when they know, logically, that they shouldn’t.
There are certainly many situations in which we want people to conform even though they don’t want to, like when my kids have to go to bed or buckle their seatbelts–as I said yesterday, the feedback loop exists because it is useful.
But there are plenty of situations where we don’t want people to conform, like when trying to brainstorm new ideas.
Under what conditions will people disobey authority?
But in person, people may disobey authorities when they have some other social system to fall back on. If disobeying an authority in Society A means I lose social status in Society A, I will be more likely to disobey if I am a member in good standing in Society B.
If I can use my disobedience against Authority A as social leverage to increase my standing in Society B, then I am all the more likely to disobey. A person who can effectively stand up to an authority figure without getting punished must be, our brains reason, a powerful person, an authority in their own right.
Teenagers do this all the time, using their defiance against adults, school, teachers, and society in general to curry higher social status among other teenagers, the people they actually care about impressing.
SJWs do this, too:
I normally consider the president of Princeton an authority figure, and even though I probably disagree with him on far more political matters than these students do, I’d be highly unlikely to be rude to him in real life–especially if I were a student he could get expelled from college.
But if I had an outside audience–Society B–clapping and cheering for me behind the scenes, the urge to obey would be weaker. And if yelling at the President of Princeton could guarantee me high social status, approval, job offers, etc., then there’s a good chance I’d do it.
But then I got to thinking: Are there any circumstances under which these students would have accepted the president’s authority?
Obviously if the man had a proven track record of competently performing a particular skill the students wished to learn, they might follow his example.
If authority works via neural feedback loops, employing some form of “mirror neurons,” do these systems activate more strongly when the people we are perceiving look more like ourselves (or like our internalized notion of what people in our “tribe” look like, since mirrors are a recent invention)?
In other words, what would a cross-racial version of the Milgram experiment look like?
Unfortunately, it doesn’t look like anyone has tried it (and to do it properly, it’d need to be a big experiment, involving several “scientists” of different races [so that the study isn’t biased by one “scientist” just being bad at projecting authority] interacting with dozens of students of different races, which would be a rather large undertaking). I’m also not finding any studies on cross-racial authority (I did find plenty of websites offering practical advice about different groups’ leadership styles), though I’m sure someone has studied it.
However, I did find cross-racial experiments on empathy, which may involve the same brain systems, and so are suggestive:
Using transcranial magnetic stimulation, we explored sensorimotor empathic brain responses in black and white individuals who exhibited implicit but not explicit ingroup preference and race-specific autonomic reactivity. We found that observing the pain of ingroup models inhibited the onlookers’ corticospinal system as if they were feeling the pain. Both black and white individuals exhibited empathic reactivity also when viewing the pain of stranger, very unfamiliar, violet-hand models. By contrast, no vicarious mapping of the pain of individuals culturally marked as outgroup members on the basis of their skin color was found. Importantly, group-specific lack of empathic reactivity was higher in the onlookers who exhibited stronger implicit racial bias.
Using the event-related potential (ERP) approach, we tracked the time-course of white participants’ empathic reactions to white (own-race) and black (other-race) faces displayed in a painful condition (i.e. with a needle penetrating the skin) and in a nonpainful condition (i.e. with Q-tip touching the skin). In a 280–340 ms time-window, neural responses to the pain of own-race individuals under needle penetration conditions were amplified relative to neural responses to the pain of other-race individuals displayed under analogous conditions.
In this study, we used functional magnetic resonance imaging (fMRI) to investigate how people perceive the actions of in-group and out-group members, and how their biased view in favor of own team members manifests itself in the brain. We divided participants into two teams and had them judge the relative speeds of hand actions performed by an in-group and an out-group member in a competitive situation. Participants judged hand actions performed by in-group members as being faster than those of out-group members, even when the two actions were performed at physically identical speeds. In an additional fMRI experiment, we showed that, contrary to common belief, such skewed impressions arise from a subtle bias in perception and associated brain activity rather than decision-making processes, and that this bias develops rapidly and involuntarily as a consequence of group affiliation. Our findings suggest that the neural mechanisms that underlie human perception are shaped by social context.
None of these studies shows definitively whether or not in-group vs. out-group biases are an inherent feature of neurological systems, but Avenanti’s finding that people were more empathetic toward a purple-skinned person than toward a member of a racial out-group suggests that some amount of learning is involved in the process–and that rather than comparing people against one’s in-group, we may be comparing them against our out-group.
At any rate, you may get similar outcomes either way.
In cases where you want to promote group cohesion and obedience, it may be beneficial to sort people by self-identity.
In cases where you want to guard against groupthink, obedience, or conformity, it may be beneficial to mix up the groups. Intellectual diversity is great, but even ethnic diversity may help people resist defaulting to obedience, especially when they know they shouldn’t.
Using data from two panel studies on U.S. firms and an online experiment, we examine investor reactions to increases in board diversity. Contrary to conventional wisdom, we find that appointing female directors has no impact on objective measures of performance, such as ROA, but does result in a systematic decrease in market value.
(Solal argues that investors may perceive the hiring of women–even competent ones–as a sign that the company is pursuing social justice goals instead of money-making goals and dump the stock.)
Additionally, diverse companies may find it difficult to work together toward a common goal–there is a good quantity of evidence that increasing diversity decreases trust and inhibits group cohesion. EG, from The downside of diversity:
IT HAS BECOME increasingly popular to speak of racial and ethnic diversity as a civic strength. From multicultural festivals to pronouncements from political leaders, the message is the same: our differences make us stronger.
But a massive new study, based on detailed interviews of nearly 30,000 people across America, has concluded just the opposite. Harvard political scientist Robert Putnam — famous for “Bowling Alone,” his 2000 book on declining civic engagement — has found that the greater the diversity in a community, the fewer people vote and the less they volunteer, the less they give to charity and work on community projects. In the most diverse communities, neighbors trust one another about half as much as they do in the most homogenous settings. The study, the largest ever on civic engagement in America, found that virtually all measures of civic health are lower in more diverse settings.
As usual, I suspect there is an optimum level of diversity–depending on a group’s purpose and its members’ preferences–that helps minimize groupthink while still preserving most of the benefits of cohesion.
So I was thinking the other day about the question of why people go along with others and do things even when they believe (or know) they shouldn’t. As Tolstoy asks, why did the French army go along with this mad idea to invade Russia in 1812? Why did Milgram’s subjects obey his orders to “electrocute” people? Why do I feel emotionally distressed when refusing to do something, even when I have very good reasons to refuse?
As I mentioned ages ago, I suspect that normal people have neural circuits that reward them for imitating others and punish them for failing to imitate. Mirror neurons probably play a critical role in this process, but probably aren’t the complete story.
These feedback loops are critical for learning–infants only a few months old begin the process of learning to talk by moving their mouths and making “ba ba” noises in imitation of their parents. (Hence why it is called “babbling.”) They do not consciously say to themselves, “let me try to communicate with the big people by making their noises;” they just automatically move their faces to match the faces you make at them. It’s an instinct.
You probably do this, too. Just watch what happens when one person in a room yawns and then everyone else feels compelled to do it, too. Or if you suddenly turn and look at something behind the group of people you’re with–others will likely turn and look, too.
Autistic infants have trouble with imitation, (and according to Wikipedia, several studies have found abnormalities in their mirror neuron systems, though I suspect the matter is far from settled–among other things, I am not convinced that everyone with an ASD diagnosis actually has the same thing going on.) Nevertheless, there is probably a direct link between autistic infants’ difficulties with imitation and their difficulties learning to talk.
For adults, imitation is less critical (you can, after all, consciously decide to learn a new language,) but still important for survival. If everyone in your village drinks out of one well and avoids the other well, even if no one can explain why, it’s probably a good idea to go along and only drink out of the “good” well. Something pretty bad probably happened to the last guy who drank out of the “bad” well, otherwise the entire village wouldn’t have stopped drinking out of it. If you’re out picking berries with your friends when suddenly one of them runs by yelling “Tiger!” you don’t want to stand there and yell, “Are you sure?” You want to imitate them, and fast.
Highly non-conformist people probably have “defective” or low-functioning feedback loops. They simply feel less compulsion to imitate others–it doesn’t even occur to them to imitate others! These folks might die in interesting ways, but in the meanwhile, they’re good sources for ideas other people just wouldn’t have thought of. I suspect they are concentrated in the arts, though clearly some of them are in programming.
Normal people’s feedback loops kick in when they are not imitating others around them, making them feel embarrassed, awkward, or guilty. When they imitate others, their brains reward them, making them feel happy. This leads people to enjoy a variety of group-based activities, from football games to prayer circles to line dancing to political rallies.
At its extreme, these groups become “mobs,” committing violent acts that many of the folks involved wouldn’t under normal circumstances.
Highly conformist people’s feedback loops are probably over-active, making them feel awkward or uncomfortable while simply observing other people not imitating the group. This discomfort can only be relieved by getting those other people to conform. These folks tend to favor more restrictive social policies and can’t understand why other people would possibly want to do those horrible, non-conforming things.
To reiterate: this feedback system exists because it helped your ancestors survive. It is not people being “sheep;” it is a perfectly sensible approach to learning about the world and avoiding dangers. And different people have stronger or weaker feedback loops, resulting in more or less instinctual desire to go along with and imitate others.
However, there are times when you shouldn’t imitate others. Times when, in fact, everyone else is wrong.
The Milgram Experiment places the subject in a situation where their instinct to obey the experimenter (an “authority figure”) is in conflict with their rational desire not to harm others (and their instinctual empathizing with the person being “electrocuted.”)
In case you have forgotten the Milgram Experiment, it went like this: an unaware subject is brought into the lab, where he meets the “scientist” and a “student,” who are really in cahoots. The subject is told that he is going to assist with an experiment to see whether administering electric shocks to the “student” will make him learn faster. The “student” also tells the subject, in confidence, that he has a heart condition.
The real experiment is to see if the subject will shock the “student” to death at the “scientist’s” urging.
No actual shocks are administered, but the “student” is a good actor, making out that he is in terrible pain and then suddenly going silent, etc.
Before the experiment, Milgram polled various people, both students and “experts” in psychology, and pretty much everyone agreed that virtually no one would administer all of the shocks, even when pressured by the “scientist.”
In Milgram’s first set of experiments, 65 percent (26 of 40) of experiment participants administered the experiment’s final massive 450-volt shock, though many were very uncomfortable doing so; at some point, every participant paused and questioned the experiment; some said they would refund the money they were paid for participating in the experiment. Throughout the experiment, subjects displayed varying degrees of tension and stress. Subjects were sweating, trembling, stuttering, biting their lips, groaning, digging their fingernails into their skin, and some were even having nervous laughing fits or seizures. (bold mine)
I’m skeptical about the seizures, but the rest sounds about right. Resisting one’s own instinctual desire to obey–or putting the desire to obey in conflict with one’s other desires–creates great emotional discomfort.
So much so, that it feels really dickish to point out that dogs aren’t actually humans and we don’t actually treat them like full family members. Maybe this is just the American difficulty with shades of gray, where such an argument is seen as the moral equivalent of eating puppies for breakfast, or maybe extreme dog affection is an instinctual mental trait of healthy people, and so only abnormal weirdos claim that it sounds irrational.
As we discussed yesterday, pet ownership is normal (in that the majority of Americans own pets,) and pet owners themselves are disproportionately married suburbanites with children. However, pet ownership is also somewhat exceptional, in that Americans–particularly American whites–appear globally unique in their high degree of affection for pets.
Incidentally, 76% of dog owners have bought Christmas presents for their dogs. (I’ve even done this.)
Why do people love dogs (and other pets) so much?
The Wikipedia cites a couple of theories, eg:
Wilson’s (1984) biophilia hypothesis is based on the premise that our attachment to and interest in animals stems from the strong possibility that human survival was partly dependent on signals from animals in the environment indicating safety or threat. The biophilia hypothesis suggests that now, if we see animals at rest or in a peaceful state, this may signal to us safety, security and feelings of well-being which in turn may trigger a state where personal change and healing are possible.
Since I tend to feel overwhelmingly happy and joyful while walking in the woods, I understand where this theory comes from, but it doesn’t explain why suburban white parents like pets more than, say, single Chinese men, or why hunter-gatherers (or recently settled hunter-gatherers) aren’t the most avid pet-owners (you would think hunter-gatherers would be particularly in tune with the states of the animals around them!)
So I propose a different theory:
Pets are (mostly) toy versions of domestic animals.
Europeans–and Americans–have traditionally been engaged in small-scale farming and animal husbandry, raising chickens, pigs, cattle, horses, sheep, and occasionally goats, geese, turkeys, and ducks.
Dogs and cats held a special place on the farm. Dogs were an indispensable part of their operations, both to protect the animals and help round them up, and worked closely with the humans in farm management. Much has been written on the relationship between the shepherd and his sheep, but let us not overlook the relationship between the shepherd and his dog.
Cats also did their part, by eliminating the vermin that were attracted to the farmer’s grain.
These dogs and cats are still “working” animals rather than “pets” kept solely for their company, but they clearly enjoy a special status in the farmer’s world, helpers rather than food.
For children, raising “pets” teaches valuable skills necessary for caring for larger animals–better to make your learning mistakes when the only one dependent on you is a hamster than when it’s a whole flock of sheep and your family’s entire livelihood.
Raising pets provides an additional benefit in creating the bond between a child and dog that will eventually transform into the working relationship between farmer and farm-dog.
Empathy has probably played an important role in animal domestication–the ability to understand the animal’s point of view and care about its well being probably helps a lot when trying to raise it from infancy to adulthood. People with higher levels of empathy may have been better at domesticating animals in the first place, and living in an economy dependent on animal husbandry may have also selected for people with high levels of empathy.
In other words, people who treated their dogs well have probably been more evolutionarily successful than people who didn’t, pushing us toward instinctually treating dogs like one of the family. (Though I still think that people who sell cancer treatments for cats and dogs are taking advantage of gullible pet owners and that actually treating an animal just like a human is a bad idea. I also find it distasteful to speak of adopted dogs finding their “forever homes,” a phrase lifted from human adoption.)
However, if you’ve ever interacted with humans, you’ve probably noticed by now that some would give their dog their right kidney, and some would set a dog on fire without blinking.
(I am reminded here of the passage in Philippe Bourgois’s In Search of Respect in which the anthropologist is shocked to discover that violent Nuyorican crack dealers think torturing animals is funny.)
I have been looking for a map showing the historical distribution of domesticated animals in different parts of the globe, but have so far failed. I’d be most grateful if anyone can find one. To speak very generally, Australia historically had no domesticated animals, South America had llamas, North America had dogs, African hunter-gatherers didn’t have any, African horticulturalists had a chicken-like animal, and then Europe/Asia/The Middle East/India/other Africans had a large variety of animals, like camels and yaks and horses and goats.
…a deletion variant of the ADRA2b gene. Carriers remember emotionally arousing images more vividly and for a longer time, and they also show more activation of the amygdala when viewing such images (Todd and Anderson, 2009; Todd et al., 2015). … Among the Shors, a Turkic people of Siberia, the incidence was 73%. Curiously, the incidence was higher in men (79%) than in women (69%). It may be that male non-carriers had a higher death rate, since the incidence increased with age (Mulerova et al., 2015). … The picture is still incomplete but the incidence of the ADRA2b deletion variant seems to range from a low of 10% in some sub-Saharan African groups to a high of 50-65% in some European groups and 55-75% in some East Asian groups. Given the high values for East Asians, I suspect this variant is not a marker for affective empathy per se but rather for empathy in general (cognitive and affective). [source]
The Shors are a small, formerly semi-nomadic group from Siberia. I haven’t found out much about them, but I bet they had dogs, like other Siberian groups.
Frost hypothesizes that extensive empathy developed as part of the suite of mental traits that made life possible in large communities of bronze-age hunter-gatherers along the Baltic:
This weak kinship zone may have arisen in prehistory along the coasts of the North Sea and the Baltic, which were once home to a unique Mesolithic culture (Price, 1991). An abundance of marine resources enabled hunter-fisher-gatherers to achieve high population densities by congregating each year in large coastal agglomerations for fishing, sealing, and shellfish collecting. Population densities were comparable in fact to those of farming societies, but unlike the latter there was much “churning” because these agglomerations formed and reformed on a yearly basis. Kinship obligations would have been insufficient to resolve disputes peaceably, to manage shared resources, and to ensure respect for social rules. Initially, peer pressure was probably used to get people to see things from the other person’s perspective. Over time, however, the pressure of natural selection would have favored individuals who more readily felt this equivalence of perspectives, the result being a progressive hardwiring of compassion and shame and their gradual transformation into empathy and guilt (Frost, 2013a; Frost, 2013b).
Empathy and guilt are brutally effective ways to enforce social rules. If one disobeys these internal overseers, the result is self-punishment that passes through three stages: anguish, depression and, ultimately, suicidal ideation. [source]
Someone has been reading a lot of Dostoyevsky. But I’m wondering if the first ingredient is actually farming/animal husbandry.
1. People with high levels of empathy may have had an easier time domesticating animals/raising domesticated animals, creating a feedback loop of increasing empathy in farming populations.
2. This empathetic connection was strongest with dogs and cats, who aren’t meat to be slaughtered but human partners.
3. Children assigned the task of raising dogs and cats bonded with their charges.
4. Modern “pets” are (living) toy versions of the working dogs and cats who once helped manage the farms.
1. Do you have a pet?
2. Do you think pets should be treated like family members/humans?
3. Would you shoot your pet for a million dollars?
B. Yes, but I would use the money to raise 100 abandoned animals out of suffering.
D. That’s a terrible question! What kind of sick fuck makes up a question like that?
Don’t get me wrong. I like animals; I just don’t like them in my house. Every time I petsit for friends with cats, I am reminded of why I don’t own cats: scooping feces is repulsive (and don’t get me started on Toxoplasma gondii!) Dogs are marginally better, in that the homes of dog owners don’t always smell of feces, but unfortunately they often smell of dog.
For this post, I am defining “pet” as animals that people keep solely for companionship. Animals kept because they do useful things or materially benefit their owners, like seeing eye dogs, egg-laying chickens, mouse-hunting cats, race horses, or dancing bears are not “pets.” Medical “therapy animals” are basically pets. It makes plenty of sense for people to keep around work animals, but pets seem to be kept around simply for the enjoyment of their company.
According to Wikipedia, Americans own approximately 94 million cats, 78 million dogs, 172 million fish, and 45 million small mammals, reptiles, etc. (Though of course some of these are “useful” animals that I wouldn’t count.) This comes out to about 4x as many pets as children, concentrated in 60% of the households (most pet owners have more than one.)
Pets cost quite a bit of money–the average small dog costs about $7,000 to $13,000 over its 14 year lifespan; the average large dog costs $6,000 to $8,000 over its much shorter 8 year lifespan. [source] (Note that it is cheaper per year to own a small dog; the lower lifetime cost is due entirely to their shorter lifespans.) Cats cost about the same as dogs–people don’t spend much on “outdoor” cats, but “indoor” cats cost about $9,000 to $11,000 over their lifetimes.
Just making some rough estimates, I’d say it looks people spend $700 per year per dog or cat, which comes out to about 120 billion dollars per year. That’s a lot of money! (And this doesn’t count the expenses incurred by shelters and animal control agencies to take care of the excess pets people don’t want.)
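The rough estimate above can be reproduced directly from the post’s own figures (a back-of-envelope sketch, not precise accounting–the $700/year figure is the post’s rough average across dogs and cats):

```python
# Rough annual US pet-spending estimate from the figures quoted above.
cats, dogs = 94e6, 78e6          # US cat and dog population estimates
annual_cost_per_pet = 700        # rough $/year per dog or cat

total = (cats + dogs) * annual_cost_per_pet
print(f"${total / 1e9:.0f} billion per year")  # → $120 billion per year

# Sanity check on the $700/year figure: a small dog at ~$10,000
# (midpoint of the quoted $7,000-$13,000 range) over a 14-year lifespan.
small_dog_per_year = 10_000 / 14
print(round(small_dog_per_year))  # → 714
```

The per-year check also illustrates the parenthetical point: the small dog’s yearly cost (~$714) is lower than a large dog’s (~$7,000 over 8 years, or ~$875/year), even though its lifetime total is higher.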
Americans are probably exceptional in the number of pets they have. According to Wikipedia, 46% of the world’s pet dog population lives in the US. (By contrast, only 4.4% of the world’s human population lives in the US.) The ratio gets even more skewed if we break it down by race–63% of America’s whites own pets, versus only 49% of the non-whites. [source]
However, other countries similar to the US don’t seem as keen on pets: the %pets/%people ratio for the US is 10.5, for Canada 7.5, and for Britain, 5.8. This might have to do with factors like Britain being a more crowded country where people have less space for pets, or with the Wikipedia data being inaccurate. Either way, I think it’s safe to say that pets are very characteristically American, and especially a white American thing.
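The US ratio quoted above falls straight out of the two Wikipedia percentages (a trivial sanity check, using only the figures given in this post):

```python
# %pets / %people ratio: the US holds ~46% of the world's pet dogs
# but only ~4.4% of the world's human population.
pct_us_dogs = 46.0
pct_us_people = 4.4

ratio = pct_us_dogs / pct_us_people
print(round(ratio, 1))  # → 10.5, the figure quoted for the US
```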
One theory about why people own so many pets is that they’re substitute children/companions/friends for lonely people who don’t have kids/spouses/friends, perhaps as a side effect of our highly atomized culture. I came into this post expecting to confirm this, but it looks like Crazy Cat Ladies are actually a relatively small percent of the overall pet-owning population.
According to Gallup, 50% of married people own a dog, and 33% own a cat (some people own both.) By contrast, only 37% of unmarried people own dogs and only 25% own cats. People with children under 18 are more likely to own pets than people without. And people from the “East” are less likely to own pets than people from the “West.” (Interestingly, “westerners” are disproportionately more likely to own cats.)
So it looks to me like most pet ownership is actually motivated by the idea that kids should have pets, with pets more common in suburban or rural areas where they have more room to run around. This is probably particularly so for cats, who are probably more likely to be “outdoor” pets or mouse-catching farm cats in rural areas (ie, the “West.”)
There is a widespread belief–perhaps folk belief–that pet ownership is good for people. Gallup found that 60% of people believe that pet owners lead more satisfying lives than non-pet owners; numerous studies claim that pet ownership–or even just occasional interaction–makes people healthier. There even exists an “animal therapy” industry. Unfortunately, the studies on the subject look rather unreliable–the ones about pet ownership are confounded by healthier people being more likely to have pets in the first place, for example.
And yet, there’s something about the notion that I find appealing; something about playing with happy puppies or petting a bunny that I find downright pleasant. Maybe it’s something as simple as animals being nice and therefore making people happy.
It’s getting late, so I’ll continue this tomorrow.
But humans are not mere action-reaction systems; they have qualia, an inner experience of being.
One of my themes here is the idea that various psychological traits, like anxiety, guilt, depression, or disgust, might not be just random things we feel, but exist for evolutionary reasons. Each of these emotions, when experienced moderately, may have beneficial effects. Guilt (and its cousin, shame,) helps us maintain our social relationships with other people, aiding in the maintenance of large societies. Disgust protects us from disease and helps direct sexual interest at one’s spouse, rather than random people. Anxiety helps people pay attention to crucial, important details, and mild depression may help people concentrate, stay out of trouble, or–very speculatively–have helped our ancestors hibernate during the winter.
In excess, each of these traits is damaging, but a shortage of each trait may also be harmful.
I have commented before on the remarkable statistic that 25% of women are on anti-depressants, and if we exclude women over 60 (and below 20,) the number of women with an “anxiety disorder” jumps over 30%.
The idea that a full quarter of us are actually mentally ill is simply staggering. I see three potential causes for the statistic:
1. Doctors prescribe anti-depressants willy-nilly to everyone who asks, whether they’re actually depressed or not;
2. Something about modern life is making people especially depressed and anxious; or
3. Mental illnesses are side effects of common, beneficial conditions (similar to how sickle cell anemia is a side effect of protection from malaria.)
As you probably already know, sickle cell anemia is a genetic mutation that protects carriers from malaria. Imagine a population where 100% of people are sickle cell carriers–that is, they have one mutated gene, and one regular gene. The next generation in this population will be roughly 25% people who have two regular genes (and so die of malaria,) 50% of people who have one sickle cell and one regular gene (and so are protected,) and 25% of people will have two sickle cell genes and so die of sickle cell anemia. (I’m sure this is a very simplified scenario.)
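The 25/50/25 split in that scenario is just a Punnett-square cross of two carriers, which can be checked with a quick sketch (same simplification as above: every pairing is between two carriers, and each parent passes either gene with probability 1/2):

```python
# Cross of two sickle-cell carriers (genotype Aa x Aa):
# each parent contributes "A" (regular) or "a" (sickle) with probability 1/2.
from itertools import product

father, mother = "Aa", "Aa"
offspring = {}
for g1, g2 in product(father, mother):
    genotype = "".join(sorted(g1 + g2))  # normalize "aA" to "Aa"
    offspring[genotype] = offspring.get(genotype, 0) + 0.25

print(offspring)  # → {'AA': 0.25, 'Aa': 0.5, 'aa': 0.25}
```

So in the all-carrier population, each generation is 25% unprotected (AA), 50% protected carriers (Aa), and 25% with sickle cell anemia (aa).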
So I consider it technically possible for 25% of people to suffer a pathological genetic condition, but unlikely–malaria is a particularly ruthless killer compared to being too cheerful.
Skipping to the point, I think there’s a little of all three going on. Each of us probably has some kind of personality “set point” that is basically determined by some combination of genetics, environmental assaults, and childhood experiences. People deviate from their set points due to random stuff that happens in their lives, (job promotions, visits from friends, car accidents, etc.,) but the way they respond to adversity and the mood they tend to return to afterwards is largely determined by their “set point.” This is all a fancy way of saying that people have personalities.
The influence of random chance on these genetic/environmental factors suggests that there should be variation in people’s emotional set points–we should see that some people are more prone to anxiety, some less prone, and some of average anxiousness.
Please note that this is a statistical should, in the same sense that, “If people are exposed to asbestos, some of them should get cancer,” not a moral should, as in, “If someone gives you a gift, you should send a thank-you note.”
Natural variation in a trait does not automatically imply pathology, but being more anxious or depressive or guilt-ridden than others can be highly unpleasant. I see nothing wrong, a priori, with people doing things that make their lives more pleasant and manageable (and don’t hurt others); this is, after all, why I enjoy a cup of coffee every morning. If you are a better, happier, more productive person with medication (or without it,) then carry on; this post is not intended as a critique of anyone’s personal mental health management, nor a suggestion for how to take care of your mental health.
Our medical/psychological health system, however, operates on the assumption that medications are for pathologies only. There is no form to fill out that says, “Patient would like anti-anxiety drugs in order to live a fuller, more productive life.”
That said, all of these emotions are obviously responses to actual stuff that happens in real life, and if 25% of women are coming down with depression or anxiety disorders, I think we should critically examine whether anxiety and depression are really the disease we need to be treating, or the body’s responses to some external threat.
In a mixed group, women become quieter, less assertive, and more compliant. This deference is shown only to men and not to other women in the group. A related phenomenon is the sex gap in self-esteem: women tend to feel less self-esteem in all social settings. The gap begins at puberty and is greatest in the 15-18 age range (Hopcroft, 2009).
If more women enter the workforce–either because they think they ought to or because circumstances force them to–and the workforce triggers depression, then as the percent of women formally employed goes up, we should see a parallel rise in mental illness rates among women. Just as Adderal and Ritalin help little boys conform to the requirements of modern classrooms, Prozac and Lithium help women cope with the stress of employment.
As we discussed yesterday, fever is not a disease, but part of your body’s system for re-asserting homeostasis by killing disease microbes and making it more difficult for them to reproduce. Extreme fevers are an over-reaction and can kill you, but a normal fever below 104 degrees or so is merely unpleasant and should be allowed to do its work of making you better. Treating a normal fever (trying to lower it) interferes with the body’s ability to fight the disease and results in longer sicknesses.
Likewise, these sorts of emotions, while definitely unpleasant, may serve some real purpose.
We humans are social beings (and political animals.) We do not exist on our own; historically, loneliness was not merely unpleasant, but a death sentence. Humans everywhere live in communities and depend on each other for survival. Without refrigeration or modern storage methods, saving food was difficult. (Unless you were an Eskimo.) If you managed to kill a deer while on your own, chances are you couldn’t eat it all before it began to rot, and then your chances of killing another deer before you started getting seriously hungry were low. But if you share your deer with your tribesmates, none of the deer goes to waste, and if they share their deer with yours, you are far less likely to go hungry.
If you end up alienated from the rest of your tribe, there’s a good chance you’ll die. It doesn’t matter if they were wrong and you were right; it doesn’t matter if they were jerks and you were the nicest person ever. If you can’t depend on them for food (and mates!) you’re dead. This is when your emotions kick in.
People complain a lot that emotions are irrational. Yes, they are. They’re probably supposed to be. There is nothing “logical” or “rational” about feeling bad because someone is mad at you over something they did wrong! And yet it happens. Not because it is logical, but because being part of the tribe is more important than who did what to whom. Your emotions exist to keep you alive, not to prove rightness or wrongness.
This is, of course, an oversimplification. Men and women have been subject to different evolutionary pressures, for example. But this is close enough for the purposes of the current conversation.
If modern people are coming down with mental illnesses at astonishing rates, then maybe there is something about modern life that is making people ill. If so, treating the symptoms may make life more bearable for people while they are subject to the disease, but still does not fundamentally address whatever it is that is making them sick in the first place.
It is my own opinion that modern life is pathological, not (in most cases,) people’s reactions to it. Modern life is pathological because it is new, and therefore you aren’t adapted to it. Your ancestors have probably only lived in cities of millions of people for a few generations at most (chances are good that at least one of your great-grandparents was a farmer, if not all of them.) Naturescapes are calming and peaceful; cities are noisy, crowded, and polluted. There is a reason why schizophrenia is more common in cities than on farms. This doesn’t mean that we should just throw out cities, but it does mean we should be thoughtful about them and their effects.
People seem to do best, emotionally, when they have the support of their kin, some degree of ethnic or national pride, and economic and physical security, when they attend religious services, and when they avoid crowded cities. (Here I am, an atheist, recommending church for people.) The knowledge that you are at peace with your tribe and that your tribe has your back is almost entirely absent from most people’s modern lives; instead, people are increasingly pushed into environments where they have no tribe and most people they encounter in daily life have no connection to them. Indeed, tribalism and city living don’t seem to get along very well.
To return to healthy lives, we may need to re-think the details of modernity.
Philosophically and politically, I am a great believer in moderation and virtue as the ethical, conscious application of homeostatic systems to the self and to organizations that exist for the sake of humans. Please understand that this is not moderation in the conventional sense of “sometimes I like the Republicans and sometimes I like the Democrats,” but the self-moderation necessary for bodily homeostasis reflected at the social/organizational/national level.
For example, I have posted a bit on the dangers of mass immigration, but this is not a call to close the borders and allow no one in. Rather, I suspect that there is an optimal amount–and kind–of immigration that benefits a community (and this optimal quantity will depend on various features of the community itself, like size and resources.) Thus, each community should aim for its optimal level. But since virtually no one–certainly no one in a position of influence–advocates for zero immigration, I don’t devote much time to writing against it; it is only mass immigration that is getting pushed on us, and thus mass immigration that I respond to.
Similarly, there is probably an optimal level of communal genetic diversity. Too low, and inbreeding results. Too high, and fetuses miscarry due to incompatible genes. (Rh- mothers have difficulty carrying Rh+ fetuses, for example, because their immune systems identify the fetus’s blood as foreign and therefore attack it, killing the fetus.) As in agriculture, monocultures are at great risk of getting wiped out by disease; genetic heterogeneity helps ensure that some members of a population can survive a plague. Homogeneity helps people get along with their neighbors, but too much may lead to everyone thinking through problems in similar ways. New ideas and novel ways of attacking problems often come from people who are outliers in some way, including genetics.
There is a lot of talk ’round these parts that basically blames all the crimes of modern civilization on females. Obviously I have a certain bias against such arguments–I of course prefer to believe that women are superbly competent at all things, though I do not wish to stake the functioning of civilization on that assumption. If women are good at math, they will do math; if they are good at leading, they will lead. A society that tries to force women into professions they are not inclined to is out of kilter; likewise, so is a society where women are forced out of fields they are good at. Ultimately, I care about my doctor’s competence, not their gender.
In a properly balanced society, male and female personalities complement each other, contributing to the group’s long-term survival.
Women are not accidents of nature; they are as they are because their personalities succeeded where women with different personalities did not. Women have a strong urge to be compassionate and nurturing toward others, maintain social relations, and care for those in need of help. These instincts have, for thousands of years, helped keep their families alive.
When the masculine element becomes too strong, society becomes too aggressive. Crime goes up; unwinnable wars are waged; people are left to die. When the feminine element becomes too strong, society becomes too passive; invasions go unresisted; welfare spending becomes unsustainable. Society can’t solve this problem by continuing to give both sides everything they want, (this is likely to be economically disastrous,) but must actually find a way to direct them and curb their excesses.
I remember an article on the now-defunct neuropolitics (now that I think of it, the Wayback Machine probably has it somewhere,) on an experiment where groups with varying numbers of “liberals” and “conservatives” had to work together to accomplish tasks. The “conservatives” tended to solve their problems by creating hierarchies that organized their labor, with the leader(s) giving everyone specific tasks. The “liberals” solved their problems by incorporating new members until they had enough people to solve specific tasks. The groups that performed best, overall, were those that had a mix of ideologies, allowing them to both make hierarchical structures to organize their labor and incorporate new members when needed. I don’t remember much else of the article, nor did I read the original study, so I don’t know what exactly the tasks were, or how reliable this study really was, but the basic idea of it is appealing: organize when necessary; form alliances when necessary. A good leader recognizes the skills of different people in their group and uses their authority to direct the best use of these skills.
Our current society greatly lacks this kind of coherent, organizing direction. Most communities have very little in the way of leadership–moral, spiritual, philosophical, or material–and our society seems constantly intent on attacking and tearing down any kind of hierarchy, even those based on pure skill and competence. Likewise, much of what passes for “leadership” is people demanding that you do what they say, not demonstrating any kind of competence. But when we do find competent leaders, we would do well to let them lead.
Peter Frost recently posted on female shyness among men–more specifically, on the observation that adolescent white females appear to become very shy among groups of males and suffer depression, but adolescent black females don’t.
Frost theorizes that women are instinctually deferential to men, especially when they are economically dependent on them, and that whites show more of this deference than blacks: traditional white marriage patterns (monogamy) brought women into more contact with men, and made them more economically dependent on those men, than traditional African marriage patterns (polygyny) did, and therefore white women have evolved greater shyness.
This explanation is decent, but feels incomplete.
Did anyone bother to ask the girls why they felt shy around the boys? Probably someone has, but that information wasn’t included in the post. But I can share my own experiences.
For starters, I’ve never felt–and this may just be me–particularly shyer around males than around females, nor do I recall ever talking less in high school due to class composition. Rather, the amount I talked had entirely to do with how much I liked the subject matter vs. how tired I was. However, in non-school settings, I am less likely to talk when conversations are dominated by men, simply because men tend to talk about things I find boring, like cars, sports, or finance. (I suspect I have an unusually high tolerance for finance/economic discussions for a female, but there are limits to what even I can stand, and the other two topics drive me to tears of boredom. Sports, as far as I am concerned, are the Kardashians of men.) I am sure the same is true in reverse–when groups of women get together, they talk about stuff that men find horribly dull.
Even in classroom conversations that are ostensibly led by the teacher, male students may make responses that just aren’t interesting to the female students, leading to the females getting bored or having little to say in response.
So, do black adolescent girls and boys have more conversation topics in common than whites?
Second, related to Frost’s observations, men tend to be more aggressive while talking than women. They are louder, they interrupt more, they put less effort into assuaging people’s feelings, etc. I am sure women do things men find annoying, like rambling on forever without getting to the point or talking about their feelings in weirdly associative ways. Regardless, I suspect that women and adolescent girls (at least white ones) often find the male style overwhelming, and their response is to retreat.
When feminists say they need “safe spaces” away from men to discuss their feminism things, they aren’t entirely inaccurate. It’s just that society used to have these “safe spaces” for women back before the feminists themselves destroyed them! Even now, it is easy to join a Mommy Meetup group or find an all-female Bible study club. But, oh wait, these are regressive! What we need are all-female lawyers, or doctors, or mathematicians…
*Ahem* back on subject, if testosterone => aggression, it would be interesting to see if the difference in black vs white females is simply a result of different testosterone levels (though of course that is just kicking the ball back a bit, because we then must ask what causes different testosterone levels.)
I suspect that Frost is on the right track looking at polygyny vs. monogamy, but I think his mechanism (increased time around/dependence on men => increase shyness) is incomplete. He’s missed something from his own work: polygynous males have higher testosterone than monogamous ones (even within their own society.) (See: The Contradictions of Polygyny and Polygyny Makes Men Bigger, Tougher, and Meaner.) Even if women in polygynous societies were expected to behave exactly like women from monogamous societies, I’d expect some “spillover” effect from the higher testosterone in their men–that is, everyone in society ought to have higher testosterone levels than they would otherwise.
Additionally, let us consider that polygyny is not practiced the same everywhere. In the Middle East, sexual access to women is tightly controlled–to the point where women may be killed for extra-marital sexual activity. In this case, the women are effectively monogamous, while the men are not. By contrast, in the societies Frost describes from Sub-Saharan Africa, it sounds like both men and women have a great many sexual partners during adolescence and early adulthood (which explains the high STD rates.)
If polygyny increases male aggression and testosterone levels because the men have to invest more energy into finding mates, then it stands to reason that women who have lots of mates are also investing lots of energy into finding them, and so would also have increased levels of aggression and testosterone.
Speaking again from personal experience, I observed that my own desire to talk to men basically cratered after I got married (and then had kids.) Suddenly something about it seemed vaguely tawdry. Of course, this leaves me in a bit of a pickle, because there aren’t that many moms who want to discuss HBD or related topics. (Thankfully I have the internet, because talking to words on a screen is a very different dynamic.) Of course, if I were back on the dating market again (god forbid!) I’d have to talk to lots of men again.
So I think the equation here shouldn’t be +time with men => +shyness, -time with men => -shyness, but +pursuit of partners => +aggression, -pursuit of partners => -aggression.
None of this gets into the “depression” issue. What’s up with that?
Personally, while I felt plenty of annoying things during high school, the only ones triggered by boys were of the wanting-to-fall-in-love variety and the feeling-sad-if-someone-didn’t-like-me variety. I did feel some distress over wanting the adults to treat me like an adult, but that had nothing to do with boys. But this may just be me being odd.
We know that whites, women, and the subset of white women suffer from depression, anxiety, and other forms of mental illness at higher rates than blacks, men, and pretty much everyone else. I speculate that anxiety, shyness, disgust, and possibly even depression are part of a suite of traits that help women avoid male aggression, perform otherwise dull tasks like writing English papers or washing dishes, keep out of trouble, and stay interested in their husbands and only their husbands.
In a society where monogamy is enforced, people (or their parents) may even preferentially choose partners who seem unlikely to stray–that is, women (or men) who display little interest in actively pursuing the opposite sex. So just as women in polygynous societies may be under selective pressure to become more aggressive, women in monogamous societies may be under selective pressure to have less interest in talking to men.
Eventually, you get Japan.
Amusingly, the studies Frost quotes view white female shyness as a bad thing to be corrected, and black female non-shyness as a good thing that mysteriously exists despite adverse conditions. But what are the effects of white female shyness? Do white women go to prison, become pregnant out of wedlock, or get killed by their partners at higher rates than black women? Do they get worse grades, graduate from school at lower rates, or end up in worse professions?
Or maybe shy girls are perfectly fine the way they are and don’t need fixing.