So I was thinking about taste (flavor) and disgust (emotion.)
As I mentioned about a month ago, 25% of people are “supertasters,” that is, better at tasting than the other 75% of people. Supertasters experience flavors more intensely than ordinary tasters, resulting in a preference for “bland” food (food with too much flavor is “overwhelming” to them.) They also have a more difficult time getting used to new foods.
One of my work acquaintances of many years–we’ll call her Echo–is obese, constantly on a diet, and constantly eats sweets. She knows she should eat vegetables and tries to do so, but finds them bitter and unpleasant, and so the general outcome is as you expect: she doesn’t eat them.
Since I find most vegetables quite tasty, I find this attitude very strange–but I am willing to admit that I may be the one with unusual attitudes toward food.
Echo is also quite conservative.
This got me thinking about vegetarians vs. people who think vegetarians are crazy. Why (aside from the novelty of the idea) should vegetarians be liberals? Why aren’t vegetarians just people who happen to really like vegetables?
What if there were something in preference for vegetables themselves that correlated with political ideology?
Certainly we can theorize that “supertaster” => “vegetables taste bitter” => “dislike of vegetables” => “thinks vegetarians are crazy.” (Some supertasters might think meat tastes bad, but anecdotal evidence doesn’t support this; see also Wikipedia, where supertasting is clearly associated with responses to plants):
Any evolutionary advantage to supertasting is unclear. In some environments, heightened taste response, particularly to bitterness, would represent an important advantage in avoiding potentially toxic plant alkaloids. In other environments, increased response to bitterness may have limited the range of palatable foods. …
Although individual food preference for supertasters cannot be typified, documented examples for either lessened preference or consumption include: [a list of mostly bitter items–among them coffee, beer, and mushrooms]
Mushrooms? Echo was just complaining about mushrooms.
Let’s talk about disgust. Disgust is an important reaction to things that might infect or poison you, triggering reactions from scrunching up your face to vomiting (i.e., expelling the poison.) We process disgust in our amygdalas, and some people appear to have bigger or smaller amygdalas than others, with the result that folks with larger amygdalas feel more disgust.
Humans also route a variety of social situations through their amygdalas, resulting in the feeling of “disgust” in response to things that are not rotten food, like other people’s sexual behaviors, criminals, or particularly unattractive people. People with larger amygdalas also tend to find more human behaviors disgusting, and this disgust correlates with social conservatism.
To what extent are “taste” and “disgust” independent of each other? I don’t know; perhaps they are intimately linked into a single feedback system, where disgust and taste sensitivity cause each other, or perhaps they are relatively independent, so that a few unlucky people are both super-sensitive to taste and easily disgusted.
People who find other people’s behavior disgusting and off-putting may also be people who find flavors overwhelming, prefer bland or sweet foods over bitter ones, think vegetables are icky, vegetarians are crazy, and struggle to stay on diets.
What’s that, you say, I’ve just constructed a just-so story?
Michael Shin and William McCarthy, researchers from UCLA, have found an association between counties with higher levels of support for the 2012 Republican presidential candidate and higher levels of obesity in those counties.
Shin and McCarthy’s map of obesity vs. political orientation
Looks like the Mormons and Southern blacks are outliers.
(I don’t really like maps like this for displaying data; I would much prefer a simple graph showing orientation on one axis and obesity on the other, with each county as a datapoint.)
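For anyone who wants to make that graph themselves, here is a minimal sketch of the kind of plot I have in mind, written in Python. The data file and column names (a hypothetical counties.csv with gop_vote_share and obesity_rate columns, one row per county) are made up for illustration:

```python
# Minimal sketch: county-level political orientation vs. obesity as a scatter plot.
# Assumes a hypothetical counties.csv with columns "gop_vote_share" and
# "obesity_rate", one row per county (file and column names are illustrative).
import pandas as pd
import matplotlib.pyplot as plt

counties = pd.read_csv("counties.csv")

plt.scatter(counties["gop_vote_share"], counties["obesity_rate"],
            s=8, alpha=0.4)  # one small, semi-transparent dot per county
plt.xlabel("2012 Republican vote share (%)")
plt.ylabel("Adult obesity rate (%)")
plt.title("County-level political orientation vs. obesity")
plt.show()

# A single summary number to go with the picture:
print(counties["gop_vote_share"].corr(counties["obesity_rate"]))
```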
(Unsurprisingly, the first 49 hits I got when searching for correlations between political orientation and obesity were almost all about what other people think of fat people, not what fat people think. This is probably because researchers tend to be skinny people who want to fight “fat phobia” but aren’t actually interested in the opinions of fat people.)
The 15 most caffeinated cities, from I love Coffee–note that Phoenix is #7, not #1.
Liberals are 28 percent more likely than conservatives to eat fresh fruit daily, and 17 percent more likely to eat toast or a bagel in the morning, while conservatives are 20 percent more likely to skip breakfast.
Ten percent of liberals surveyed indicated they are vegetarians, compared with 3 percent of conservatives.
Liberals are 28 percent more likely than conservatives to enjoy beer, with 60 percent of liberals indicating they like beer.
(See above where Wikipedia noted that supertasters dislike beer.) I will also note that coffee, which supertasters tend to dislike because it is too bitter, is very popular in the ultra-liberal cities of Portland and Seattle, whereas heavily sweetened iced tea is practically the official beverage of the South.
The only remaining question is whether supertasters are conservative. That may take some research.
Update: I have not found, to my disappointment, a simple study that just looks at correlation between ideology and supertasting (or nontasting.) However, I have found a couple of useful items.
Standard tests of disgust sensitivity, a questionnaire developed for this research assessing different types of moral transgressions (nonvisceral, implied-visceral, visceral) with the terms “angry” and “grossed-out,” and a taste sensitivity test of 6-n-propylthiouracil (PROP) were administered to 102 participants. [PROP is commonly used to test for “supertasters.”] Results confirmed past findings that the more sensitive to PROP a participant was the more disgusted they were by visceral, but not moral, disgust elicitors. Importantly, the findings newly revealed that taste sensitivity had no bearing on evaluations of moral transgressions, regardless of their visceral nature, when “angry” was the emotion primed. However, when “grossed-out” was primed for evaluating moral violations, the more intense PROP tasted to a participant the more “grossed-out” they were by all transgressions. Women were generally more disgust sensitive and morally condemning than men, … The present findings support the proposition that moral and visceral disgust do not share a common oral origin, but show that linguistic priming can transform a moral transgression into a viscerally repulsive event and that susceptibility to this priming varies as a function of an individual’s sensitivity to the origins of visceral disgust—bitter taste. [bold mine.]
In other words, supertasters are more easily disgusted, and with verbal priming will transfer that disgust to moral transgressions. (And easily disgusted people tend to be conservatives.)
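The study I keep wishing for is simple enough that I can sketch the whole analysis. Assuming a hypothetical survey that records, per respondent, a PROP bitterness-intensity rating and a self-reported ideology score (both the file and the column names below are invented for illustration), the core of it is a single correlation:

```python
# Sketch of the missing study: does PROP ("supertaster") sensitivity correlate
# with political ideology? All data here are hypothetical -- imagine a survey
# with a PROP intensity rating (0-100) and an ideology score
# (1 = very liberal ... 7 = very conservative) for each respondent.
import pandas as pd
from scipy.stats import pearsonr

survey = pd.read_csv("prop_ideology_survey.csv")  # hypothetical data file

r, p = pearsonr(survey["prop_intensity"], survey["ideology_score"])
print(f"PROP intensity vs. conservatism: r = {r:.2f}, p = {p:.3f}")
```

A positive, non-trivial r would be consistent with the supertaster-to-conservative chain sketched above; a near-zero r would argue against it.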
While previous studies found that inherited taste-blindness to bitter compounds such as PROP may be a risk factor for obesity, this literature has been hotly disputed (Keller et al. 2010).
(Always remember, of course, that a great many social-science studies ultimately do not replicate.)
If you aren’t familiar with the “replication crisis” in social psychology, start here, here, and here.
I consider the courses I took in college on quantitative and qualitative methods the most important of my undergraduate years. I learned thereby a great many important things about how not to conduct an experiment and how to think about experimental methodology (not to mention statistics.)
If I were putting together a list of “general education” requirements I wanted all students to take in order to declare them well-educated and ready to go out into the world, it’d include a course on Quantitative and Qualitative Methods. (Much like current “gen ed” and “distribution requirements,” the level of mathematical ability required would likely vary by field, though no one should be obtaining a college degree without some degree of numerical competence.)
But the real problem with the social science fields is not lack of rigorous statistical background, but overwhelming ideological conformity, enforced by the elders of the fields–advisers, hiring committees, textbook writers, journal editors, etc., who all believe in the same ideology and so have come to see their field as “proving” their ideology.
Ideology drives both the publication biases and the wishful thinking that underlie this crisis. For example, everyone in “Women’s studies” is a feminist who believes that “science” proves that women are oppressed because everyone they know has done studies “proving” it. You’re not going to find a lot of Women’s Studies professors aiming for tenure on the basis of their successful publication of a bunch of studies that failed to find any evidence of bias against women. Findings like that => no publication => no tenure. And besides, feminist professors see it as their moral duty to prove that discrimination exists, not to waste their time on studies that just happened not to be good enough to find the effect.
In the Social Sciences more generally, we get this “post-modern” mish-mash of everything from Marxists to Freudians to folks who like Foucault and Said, where the goal is to mush up long-winded descriptions of otherwise simple phenomena into endless Chomsky Sentences.
(Just reading the Wikipedia pages on a variety of Social Science oriented topics reveals how very little real research or knowledge is generated in these fields, and how much is based on individual theorists’ personal views. It is often obvious that virtually anyone not long steeped in the academic literature of these fields would not come up with these theories, but with something far more mundane and sensible. Economists, for all their political bias, at least provide a counterpoint to many of these theories.)
Obviously different fields study different aspects of phenomena, but entire fields should not be reduced to trying to prove one political ideology or another. If they are, they should label themselves explicitly, rather than make a pretense of neutrality.
When ideology rather than correctness becomes the standard for publication (not to mention hiring and tenure,) the natural result is incorrectness.
More statistical knowledge is not, by itself, going to resolve the problem. The fields must first recognize that they have an ideological bias problem, and then work to remedy it by letting in and publishing work by researchers outside the social science ideological mainstream. It is very easy to think your ideas sound rigorous when you are only debating with people who already agree with you; it is much more difficult to defend your views against people who disagree, or come from very different intellectual backgrounds.
They could start with–hahahaha–letting in a Republican.
Continuing with yesterday’s discussion (in response to a reader’s question):
1. Why are people snobs about intelligence?
2. Is math ability better than verbal?
3. Do people only care about intelligence in the context of making money?
1. People are snobs. Not all of them, obviously–just a lot of them.
So we’re going to have to back this up a step and ask why are people snobs, period.
Paying attention to social status–both one’s own and others’–is probably instinctual. We process social status in our prefrontal cortexes–the part of our brain generally involved in complex thought, imagination, long-term planning, personality, not being a psychopath, etc. Our brains respond positively to images of high-status items–activating reward-feedback loops that make us feel good–and negatively to images of low-status items–activating feedback loops that make us feel bad.
The mental effect is stronger when we perform high-status actions in front of others:
…researchers asked a person if the following statement was an accurate description of themselves: “I wouldn’t hesitate to go out of my way to help someone in trouble.” Some of the participants answered the question without anyone else seeing their response. Others knowingly revealed their answer to two strangers who were watching in a room next to them via video feed. The result? When the test subjects revealed an affirmative answer to an audience, their [medial prefrontal cortexes] lit up more strongly than when they kept their answers to themselves. Furthermore, when the participants revealed their positive answers not to strangers, but to those they personally held in high regard, their MPFCs and reward striatums activated even more strongly. This confirms something you’ve assuredly noticed in your own life: while we generally care about the opinions of others, we particularly care about the opinions of people who really matter to us.
(Note what constitutes a high-status activity.)
But this alone does not prove that paying attention to social status is instinctual. After all, I can also point to the part of your brain that processes written words (the Visual Word Form Area,) and yet I don’t assert that literacy is an instinct. For that matter, anything we think about has to be processed in our brains somewhere, whether instinct or not.
Better evidence comes from anthropology and zoology. According to Wikipedia, “All societies have a form of social status,” even hunter-gatherers. If something shows up in every single human society, that’s a pretty good sign that it is probably instinctual–and if it isn’t, it is so useful a thing that no society exists without it.
Even animals have social status–“Social status hierarchies have been documented in a wide range of animals: apes,[7] baboons,[8] wolves,[9] cows/bulls,[10] hens,[11] even fish,[12] and ants.[13]” We may also add horses, many monkey species, elephants, killer whales, reindeer, and probably just about all animals that live in large groups.
Among animals, social status is generally determined by a combination of physical dominance, age, relationship, and intelligence. Killer whale pods, for example, are led by the eldest female in the family; leadership in elephant herds is passed down from a deceased matriarch to her eldest daughter, even if the matriarch has surviving sisters. Male lions assert dominance by being larger and stronger than other lions.
In all of these cases, the social structure exists because it benefits the group, even if it harms some of the individuals in it. If having no social structure were beneficial for wolves, then wolf packs without alpha wolves would out-compete packs with alphas. This is the essence of natural selection.
Among humans, social status comes in two main forms, which I will call “earned” and “background.”
“Earned” social status stems from things you do, like rescuing people from burning buildings, inventing quantum physics, or stealing wallets. High status activities are generally things that benefit others, and low-status activities are generally those that harm others. This is why teachers are praised and thieves are put in prison.
Earned social status is a good thing, because it rewards people for being helpful.
“Background” social status is basically stuff you were born into or have no control over, like your race, gender, the part of the country you grew up in, your accent, name, family reputation, health/disability, etc.
Americans generally believe that you should not judge people based on background social status, but they do it, anyway.
Interestingly, high-status people are not generally violent. (Just compare crime rates by neighborhood SES.) Outside of military conquest, violence is the domain of the lower classes and those afraid they are slipping in social class, not the high class. Compare Angela Merkel to the average German far-right protester. Obviously the protester would win in a fist-fight, but Merkel is still in charge. High-class people go out of their way to donate to charity, do volunteer work, and talk about how much they love refugees. In the traditional societies of the Pacific Northwest, the wealthy held potlatches at which they distributed accumulated wealth to their neighbors; in our society, the wealthy donate millions to education. Ideally, in a well-functioning system, status is the thanks rich people get for doing things that benefit the community instead of spending their billions on gold-plated toilets.
The Arabian babbler … spends most of its life in small groups of three to 20 members. These groups lay their eggs in a communal nest and defend a small territory of trees and shrubs that provide much-needed safety from predators.
When it’s living as part of a group, a babbler does fairly well for itself. But babblers who get kicked out of a group have much bleaker prospects. These “non-territorials” are typically badgered away from other territories and forced out into the open, where they often fall prey to hawks, falcons, and other raptors. So it really pays to be part of a group. … Within a group, babblers assort themselves into a linear and fairly rigid dominance hierarchy, i.e., a pecking order. When push comes to shove, adult males always dominate adult females — but mostly males compete with males and females with females. Very occasionally, an intense “all-out” fight will erupt between two babblers of adjacent rank, typically the two highest-ranked males or the two highest-ranked females. …
Most of the time, however, babblers get along pretty well with each other. In fact, they spend a lot of effort actively helping one another and taking risks for the benefit of the group. They’ll often donate food to other group members, for example, or to the communal nestlings. They’ll also attack foreign babblers and predators who have intruded on the group’s territory, assuming personal risk in an effort to keep others safe. One particularly helpful activity is “guard duty,” in which one babbler stands sentinel at the top of a tree, watching for predators while the rest of the group scrounges for food. The babbler on guard duty not only foregoes food, but also assumes a greater risk of being preyed upon, e.g., by a hawk or falcon. …
Unlike chickens, who compete to secure more food and better roosting sites for themselves, babblers compete to give food away and to take the worst roosting sites. Each tries to be more helpful than the next. And because it’s a competition, higher-ranked (more dominant) babblers typically win, i.e., by using their dominance to interfere with the helpful activities of lower-ranked babblers. This competition is fiercest between babblers of adjacent rank. So the alpha male, for example, is especially eager to be more helpful than the beta male, but doesn’t compete nearly as much with the gamma male. Similar dynamics occur within the female ranks.
In the eighteenth and early nineteenth centuries, wealthy private individuals substantially supported the military, with particular wealthy men buying supplies for a particular regiment or a particular fort.
Noblemen paid high prices for military commands, and these posts were no sinecure. You got the obligation to substantially supply the logistics for your men, the duty to obey stupid orders that would very likely lead to your death, the duty to lead your men from in front while wearing a costume designed to make you particularly conspicuous, and the duty to engage in honorable personal combat, man to man, with your opposite number who was also leading his troops from in front.
A vestige of this tradition remains in that every English prince has been sent to war and has placed himself very much in harm’s way.
It seems obvious to me that a soldier being led by a member of the ruling class who is soaking up the bullets from in front is a lot more likely to be loyal and brave than a soldier sent into battle by distant rulers safely in Washington who despise him as a sexist homophobic racist murderer–that a soldier who sees his commander, a member of the ruling classes, fighting right in front of him is reflexively likely to fight.
(Note, however, that magnanimity is not the same as niceness. The only people who are nice to everyone are store clerks and waitresses, and they’re only nice because they have to be or they’ll get fired.)
Most people are generally aware of each others’ social statuses, using contextual clues like clothing and accents to make quick, rough estimates. These contextual clues are generally completely neutral–they just happen to correlate with other behaviors.
For example, there is nothing objectively good or bad for society about wearing your pants belted beneath your buttocks, aside from it being an awkward way to wear your pants. But the style correlates with other behaviors–crime, drug use, aggression, low paternal investment, and unemployment–all of which are detrimental to society, and so the mere sight of underwear spilling out of a man’s pants automatically assigns him low status. There is nothing causal in this relationship–being a criminal does not make you bad at buckling your pants, nor does wearing your pants around your knees somehow inspire you to do drugs. But these things correlate, and humans are very good at learning patterns.
Likewise, there is nothing objectively better about operas than Disney movies; no real difference between a cup of coffee brewed in the microwave and one from Starbucks; a Harley-Davidson and a Vespa will both get you down the road; and you can carry stuff around in just about any bag or backpack, but only the hoity-toity can afford something as objectively hideous as a $26,000 Louis Vuitton backpack.
All of these things are fairly arbitrary and culturally dependent–the way you belt your pants can’t convey social status in a society where people don’t wear pants; your taste in movies couldn’t matter before movies were invented. Among hunter-gatherers, social status is based on things like one’s skill at hunting, and if I showed up to the next PTA meeting wearing a top hat and monocle, I wouldn’t get any status points at all.
We tend to aggregate the different social status markers into three broad classes (middle, upper, and lower.) As Scott Alexander says in his post about Siderea’s essay on class in America, which divides the US into 10% Underclass, 65% Working Class, 23.5% Gentry Class, and 1.5% Elite:
Siderea notes that Church’s analysis independently reached about the same conclusion as Paul Fussell’s famous guide. I’m not entirely sure how you’d judge this (everybody’s going to include lower, middle, and upper classes), but eyeballing Fussell it does look a lot like Church, so let’s grant this.
It also doesn’t sound too different from Marx. Elites sound like capitalists, Gentry like bourgeoisie, Labor like the proletariat, and the Underclass like the lumpenproletariat. Or maybe I’m making up patterns where they don’t exist; why should the class system of 21st century America be the same as that of 19th century industrial Europe?
There’s one more discussion of class I remember being influenced by, and that’s Unqualified Reservations’ Castes of the United States. Another one that you should read but that I’ll summarize in case you don’t:
1. Dalits are the underclass, … 2. Vaisyas are standard middle-class people … 3. Brahmins are very educated people … 4. Optimates are very rich WASPs … now they’re either extinct or endangered, having been pretty much absorbed into the Brahmins. …
Michael Church’s system (henceforth MC) and the Unqualified Reservation system (henceforth UR) are similar in some ways. MC’s Underclass matches Dalits, MC’s Labor matches Vaisyas, MC’s Gentry matches Brahmins, and MC’s Elite matches Optimates. This is a promising start. It’s a fourth independent pair of eyes that’s found the same thing as all the others. (commenters bring up Joel Kotkin and Archdruid Report as similar convergent perspectives).
I suspect the tendency to try to describe society as consisting of three broad classes (with the admission that other, perhaps tiny classes that don’t exactly fit into the others might exist) is actually just an artifact of being a three-biased society that likes to group things in threes (the Trinity, three-beat joke structure, three bears, Three Musketeers, three notes in a chord, etc.) This three-bias isn’t a human universal (or so I have read) but has probably been handed down to us from the Indo-Europeans (“Many Indo-European societies know a threefold division of priests, a warrior class, and a class of peasants or husbandmen. Georges Dumézil has suggested such a division for Proto-Indo-European society,”) so we’re so used to it that we don’t even notice ourselves doing it.
(For more information on our culture’s three-bias and different number biases in other cultures, see Alan Dundes’s Interpreting Folklore, though I should note that I read it back in high school and so my memory of it is fuzzy.)
(Also, everyone is probably at least subconsciously cribbing Marx, who was probably cribbing from some earlier guy who cribbed from another earlier guy, who set out with the intention of demonstrating that society–divided into nobles, serfs, and villagers–reflected the Trinity, just like those Medieval maps that show the world divided into three parts or the conception of Heaven, Hell, and Purgatory.)
At any rate, I am skeptical of any system that lumps 65% of people into one social class while splitting off a mere 1.5% into another–too coarsely grained at one end of the scale and perhaps too finely grained at the other. Determining the exact number of social classes in American society may ultimately be futile–perhaps there really are three (or four) highly distinct groups, or perhaps social classes transition smoothly from one to the next with no sharp divisions.
I lean toward the latter theory, with broad social classes as merely a convenient shorthand for extremely broad generalizations about society. If you look any closer, you tend to find that people do draw finer-grained distinctions between themselves and others than “65% Working Class” would imply. For example, a friend who works in agriculture in Greater Appalachia once referred dismissively to other people they had to deal with as “rednecks.” I might not be able to tell what differentiates them, but clearly my friend could. Similarly, I am informed that there are different sorts of homelessness, from true street living to surviving in shelters, and that lifetime homeless people are a different breed altogether. I might call them all “homeless,” but to the homeless, these distinctions are important.
Is social class evil?
This question was suggested by a different friend.
I suspect that social class is basically, for the most part, neutral-to-useful. I base this on the fact that most people do not work very hard to erase markers of class distinction, but instead actively embrace particular class markers. (Besides, you can’t get rid of it, anyway.)
It is not all that hard to learn the norms and values of a different social class and strategically employ them. Black people frequently switch between speaking African American Vernacular English at home and standard English at work; I can discuss religion with Christian conservatives and malevolent AI risk with nerds; you can purchase a Harley Davidson t-shirt as easily as a French beret and scarf.
(I am reminded here of an experiment in which researchers were looking to document cab drivers refusing to pick up black passengers; they found that when the black passengers were dressed nicely, drivers would pick them up, but when they wore “ghetto” clothes, the cabs wouldn’t. Cabbies: responding more to perceived class than race.)
And yet, people don’t–for the most part–mass adopt the social markers of the upper class just to fool them. They love their motorcycle t-shirts, their pumpkin lattes, even their regional accents. Class markers are an important part of peoples’ cultural / tribal identities.
But what about class conflicts?
Because every class has its own norms and values, every class is, to some degree, disagreeing with the other classes. People for whom frugality and thrift are virtues will naturally think that people who drink overpriced coffee are lacking in moral character. People for whom anti-racism is the highest virtue will naturally think that Trump voters are despicable racists. A Southern Baptist sees atheists as morally depraved fetus murderers; nerds and jocks are famously opposed to each other; and people who believe that you should graduate from college, become established in your career, get married, and then have 0-1.5 children disapprove of people who drop out of high school, have a bunch of children with a bunch of different people, and go on welfare.
A moderate sense of pride in one’s own culture is probably good and healthy, but spending too much energy hating other groups is probably negative–you may end up needlessly hurting people whose cooperation you would have benefited from, reducing everyone’s well-being.
(A good chunk of our political system’s dysfunctions are probably due to some social classes believing that other social classes despise them and are voting against their interests, and so counter-voting to screw over the first social class. I know at least one person who switched allegiance from Hillary to Trump almost entirely to stick it to liberals they think look down on them for classist reasons.)
Ultimately, though, social class is with us whether we like it or not. Even if a full generation of orphan children were raised with no knowledge of their origins and completely equal treatment by society at large, each would end up marrying/associating with people who have personalities similar to their own (and remember that genetics plays a large role in personality.) Just as current social classes in America are ethnically different (Southern whites are drawn from different European populations than Northern whites, for example), so would the society resulting from our orphanage experiment differentiate into groups of genetically and temperamentally similar people.
Why do Americans generally proclaim their opposition to judging others based on background status, and then act classist, anyway? There are two main reasons.
As already discussed, different classes have real disagreements with each other. Even if I think I shouldn’t judge others, I can’t put aside my moral disgust at certain behaviors just because they happen to correlate with different classes.
It sounds good to say nice, magnanimous things that make you sound more socially sensitive and aware than others, like, “I wouldn’t hesitate to go out of my way to help someone in trouble.” So people like to say these things whether they really mean them or not.
In reality, people are far less magnanimous than they like to claim they are in front of their friends. People like to say that we should help the homeless and save the whales and feed all of the starving children in Africa, but few people actually go out of their way to do such things.
There is a reason Mother Teresa is considered a saint, not an archetype.
In real life, not only does magnanimity have a cost (which the rich can better afford,) but if you don’t live up to your claims, people will notice. If you talk a good talk about loving others but actually mistreat them, people will decide that you’re a hypocrite. On the internet, you can post memes for free without having to back them up with real action, causing discussions to descend into competitive virtue-signalling in which no one wants to be the first person to admit that they actually are occasionally self-interested. (Cory Doctorow has a relevant discussion about how “reputation economies”–especially internet-based ones–can go horribly wrong.)
Unfortunately, people often confuse background and earned status.
American society officially has no hereditary social classes–no nobility, no professions limited legally to certain ethnicities, no serfs, no Dalits, no castes, etc. Officially, if you can do the job, you are supposed to get it.
Most of us believe, at least abstractly, that you shouldn’t judge or discriminate against others for background status factors they have no control over, like where they were born, the accent they speak with, or their skin tone. If I have two resumes, one from someone named Lakeesha, and the other from someone named Ian William Esquire III, I am supposed to consider each on their merits, rather than the connotations their names invoke.
But because “status” is complicated, people often go beyond advocating against “background” status and also advocate that we shouldn’t accord social status for any reason at all. That is, full social equality.
This is not possible and would be deeply immoral in practice.
When you need heart surgery, you really hope that the guy cutting you open is a top-notch heart surgeon. When you’re flying in an airplane, you hope that both the pilot and the guys who built the plane are highly skilled. Chefs must be good at cooking and authors good at writing.
These are all forms of earned status, and they are good.
Smart people are valuable to society because they do nice things like save you from heart attacks or invent cell-phones. This is not “winning at capitalism;” this is benefiting everyone around them. In this context, I’m happy to let smart people have high status.
In a hunter-gatherer society, smart people are the ones who know the most about where animals live and how to track them, how to get water during a drought, and where to find the 1-inch stem they spotted last season that marks a tasty underground tuber. Among nomads, smart people are the ones with the biggest mental maps of the territory, the folks who know the safest and quickest routes from good summer pasture to good winter pasture, how to save an animal from dying, and how to heal a sick person. Among pre-literate people, smart people composed epic poems that entertained their neighbors for many winters’ nights, and among literate ones, the smart people became scribes and accountants. Even the communists valued smart people, when they weren’t chopping their heads off for being bourgeois scum.
So even if we say, abstractly, “I value all people, no matter how smart they are,” the smart people do more of the stuff that benefits society than the dumb people, which means they end up with higher social status.
So, yes, high IQ is a high social status marker, and low IQ is a low social status marker, and thus at least some people will be snobs about signaling their IQ and their disdain for dumb people.
BUT.
I am speaking here very abstractly. There are plenty of “high status” people who are not benefiting society at all. Plenty of people who use their status to destroy society while simultaneously enriching themselves. And yes, someone can come into a community, strip out all of its resources and leave behind pollution and unemployment, and happily call it “capitalism” and enjoy high status as a result.
I would be very happy if we could stop engaging in competitive holiness spirals and stop lionizing people who became wealthy by destroying communities. I don’t want capitalism at the expense of having a pleasant place to live in.
As we were discussing yesterday, I theorize that people have neural feedback loops that reward them for conforming/imitating others/obeying authorities and punish them for disobeying/not conforming.
This leads people to obey authorities or go along with groups even when they know, logically, that they shouldn’t.
There are certainly many situations in which we want people to conform even though they don’t want to, like when my kids have to go to bed or buckle their seatbelts–as I said yesterday, the feedback loop exists because it is useful.
But there are plenty of situations where we don’t want people to conform, like when trying to brainstorm new ideas.
Under what conditions will people disobey authority?
But in person, people may disobey authorities when they have some other social system to fall back on. If disobeying an authority in Society A means I lose social status in Society A, I will be more likely to disobey if I am a member in good standing in Society B.
If I can use my disobedience against Authority A as social leverage to increase my standing in Society B, then I am all the more likely to disobey. A person who can effectively stand up to an authority figure without getting punished must be, our brains reason, a powerful person, an authority in their own right.
Teenagers do this all the time, using their defiance against adults, school, teachers, and society in general to curry higher social status among other teenagers, the people they actually care about impressing.
SJWs do this, too:
I normally consider the president of Princeton an authority figure, and even though I probably disagree with him on far more political matters than these students do, I’d be highly unlikely to be rude to him in real life–especially if I were a student he could get expelled from college.
But if I had an outside audience–Society B–clapping and cheering for me behind the scenes, the urge to obey would be weaker. And if yelling at the President of Princeton could guarantee me high social status, approval, job offers, etc., then there’s a good chance I’d do it.
But then I got to thinking: Are there any circumstances under which these students would have accepted the president’s authority?
Obviously if the man had a proven track record of competently performing a particular skill the students wished to learn, they might follow his example.
Or not.
If authority works via neural feedback loops, employing some form of “mirror neurons,” do these systems activate more strongly when the people we are perceiving look more like ourselves (or like our internalized notion of what people in our “tribe” look like, since mirrors are a recent invention)?
In other words, what would a cross-racial version of the Milgram experiment look like?
Unfortunately, it doesn’t look like anyone has tried it (and to do it properly, it’d need to be a big experiment, involving several “scientists” of different races [so that the study isn’t biased by one “scientist” just being bad at projecting authority] interacting with dozens of students of different races, which would be a rather large undertaking.) I’m also not finding any studies on cross-racial authority (I did find plenty of websites offering practical advice about different groups’ leadership styles,) though I’m sure someone has studied it.
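To put a rough number on “rather large undertaking,” here is a back-of-the-envelope power calculation. The obedience rates are pure assumptions for illustration (65% is roughly Milgram’s original full-obedience rate; the 45% figure for a cross-race “scientist” is invented):

```python
# Back-of-the-envelope sample size for a hypothetical cross-racial Milgram design.
# The 65% vs. 45% full-obedience rates are assumptions for illustration only.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.65, 0.45)  # Cohen's h for the two proportions
n_per_cell = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                          power=0.8, alternative="two-sided")
print(f"~{n_per_cell:.0f} subjects per condition")

# With a 2x2 design (subject race x "scientist" race) that's four cells --
# roughly 200 subjects in total, before adding several "scientists" of each
# race to control for individual differences in projecting authority.
```

That is several times the size of Milgram’s original runs of 40 subjects, which is part of why I suspect no one has bothered.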
However, I did find cross-racial experiments on empathy, which may involve the same brain systems, and so are suggestive:
Using transcranial magnetic stimulation, we explored sensorimotor empathic brain responses in black and white individuals who exhibited implicit but not explicit ingroup preference and race-specific autonomic reactivity. We found that observing the pain of ingroup models inhibited the onlookers’ corticospinal system as if they were feeling the pain. Both black and white individuals exhibited empathic reactivity also when viewing the pain of stranger, very unfamiliar, violet-hand models. By contrast, no vicarious mapping of the pain of individuals culturally marked as outgroup members on the basis of their skin color was found. Importantly, group-specific lack of empathic reactivity was higher in the onlookers who exhibited stronger implicit racial bias.
Using the event-related potential (ERP) approach, we tracked the time-course of white participants’ empathic reactions to white (own-race) and black (other-race) faces displayed in a painful condition (i.e. with a needle penetrating the skin) and in a nonpainful condition (i.e. with Q-tip touching the skin). In a 280–340 ms time-window, neural responses to the pain of own-race individuals under needle penetration conditions were amplified relative to neural responses to the pain of other-race individuals displayed under analogous conditions.
In this study, we used functional magnetic resonance imaging (fMRI) to investigate how people perceive the actions of in-group and out-group members, and how their biased view in favor of own team members manifests itself in the brain. We divided participants into two teams and had them judge the relative speeds of hand actions performed by an in-group and an out-group member in a competitive situation. Participants judged hand actions performed by in-group members as being faster than those of out-group members, even when the two actions were performed at physically identical speeds. In an additional fMRI experiment, we showed that, contrary to common belief, such skewed impressions arise from a subtle bias in perception and associated brain activity rather than decision-making processes, and that this bias develops rapidly and involuntarily as a consequence of group affiliation. Our findings suggest that the neural mechanisms that underlie human perception are shaped by social context.
None of these studies shows definitively whether in-group vs. out-group biases are an inherent feature of our neurological systems, but Avenanti’s finding that people were more empathetic toward a purple-skinned person than toward a member of a racial out-group suggests that some amount of learning is involved in the process–and that rather than comparing people against one’s in-group, we may be comparing them against our out-group.
At any rate, you may get similar outcomes either way.
In cases where you want to promote group cohesion and obedience, it may be beneficial to sort people by self-identity.
In cases where you want to guard against groupthink, obedience, or conformity, it may be beneficial to mix up the groups. Intellectual diversity is great, but even ethnic diversity may help people resist defaulting to obedience, especially when they know they shouldn’t.
Using data from two panel studies on U.S. firms and an online experiment, we examine investor reactions to increases in board diversity. Contrary to conventional wisdom, we find that appointing female directors has no impact on objective measures of performance, such as ROA, but does result in a systematic decrease in market value.
(Solal argues that investors may perceive the hiring of women–even competent ones–as a sign that the company is pursuing social justice goals instead of money-making goals and dump the stock.)
Additionally, diverse companies may find it difficult to work together toward a common goal–there is a good quantity of evidence that increasing diversity decreases trust and inhibits group cohesion. E.g., from The downside of diversity:
IT HAS BECOME increasingly popular to speak of racial and ethnic diversity as a civic strength. From multicultural festivals to pronouncements from political leaders, the message is the same: our differences make us stronger.
But a massive new study, based on detailed interviews of nearly 30,000 people across America, has concluded just the opposite. Harvard political scientist Robert Putnam — famous for “Bowling Alone,” his 2000 book on declining civic engagement — has found that the greater the diversity in a community, the fewer people vote and the less they volunteer, the less they give to charity and work on community projects. In the most diverse communities, neighbors trust one another about half as much as they do in the most homogenous settings. The study, the largest ever on civic engagement in America, found that virtually all measures of civic health are lower in more diverse settings.
As usual, I suspect there is an optimum level of diversity–depending on a group’s purpose and its members’ preferences–that helps minimize groupthink while still preserving most of the benefits of cohesion.
So I was thinking the other day about why people go along with others and do things even when they believe (or know) they shouldn’t. As Tolstoy asks, why did the French army go along with the mad idea of invading Russia in 1812? Why did Milgram’s subjects obey his orders to “electrocute” people? Why do I feel emotionally distressed when refusing to do something, even when I have very good reasons to refuse?
As I mentioned ages ago, I suspect that normal people have neural circuits that reward them for imitating others and punish them for failing to imitate. Mirror neurons probably play a critical role in this process, but probably aren’t the complete story.
These feedback loops are critical for learning–infants only a few months old begin the process of learning to talk by moving their mouths and making “ba ba” noises in imitation of their parents. (Hence why it is called “babbling.”) They do not consciously say to themselves, “let me try to communicate with the big people by making their noises;” they just automatically move their faces to match the faces you make at them. It’s an instinct.
You probably do this, too. Just watch what happens when one person in a room yawns and then everyone else feels compelled to do it, too. Or if you suddenly turn and look at something behind the group of people you’re with–others will likely turn and look, too.
Autistic infants have trouble with imitation, (and according to Wikipedia, several studies have found abnormalities in their mirror neuron systems, though I suspect the matter is far from settled–among other things, I am not convinced that everyone with an ASD diagnosis actually has the same thing going on.) Nevertheless, there is probably a direct link between autistic infants’ difficulties with imitation and their difficulties learning to talk.
For adults, imitation is less critical (you can, after all, consciously decide to learn a new language,) but still important for survival. If everyone in your village drinks out of one well and avoids the other well, even if no one can explain why, it’s probably a good idea to go along and only drink out of the “good” well. Something pretty bad probably happened to the last guy who drank out of the “bad” well, otherwise the entire village wouldn’t have stopped drinking out of it. If you’re out picking berries with your friends when suddenly one of them runs by yelling “Tiger!” you don’t want to stand there and yell, “Are you sure?” You want to imitate them, and fast.
Highly non-conformist people probably have “defective” or low-functioning feedback loops. They simply feel less compulsion to imitate others–it doesn’t even occur to them to imitate others! These folks might die in interesting ways, but in the meanwhile, they’re good sources for ideas other people just wouldn’t have thought of. I suspect they are concentrated in the arts, though clearly some of them are in programming.
Normal people’s feedback loops kick in when they are not imitating others around them, making them feel embarrassed, awkward, or guilty. When they imitate others, their brains reward them, making them feel happy. This leads people to enjoy a variety of group-based activities, from football games to prayer circles to line dancing to political rallies.
Normal people having fun by synchronizing their bodily movements.
At their extreme, these groups become “mobs,” committing violent acts that many of the folks involved wouldn’t commit under normal circumstances.
Highly conformist people’s feedback loops are probably over-active, making them feel awkward or uncomfortable while simply observing other people not imitating the group. This discomfort can only be relieved by getting those other people to conform. These folks tend to favor more restrictive social policies and can’t understand why other people would possibly want to do those horrible, non-conforming things.
To reiterate: this feedback system exists because it helped your ancestors survive. It is not people being “sheep;” it is a perfectly sensible approach to learning about the world and avoiding dangers. And different people have stronger or weaker feedback loops, resulting in more or less instinctual desire to go along with and imitate others.
However, there are times when you shouldn’t imitate others. Times when, in fact, everyone else is wrong.
The Milgram Experiment places the subject in a situation where their instinct to obey the experimenter (an “authority figure”) is in conflict with their rational desire not to harm others (and their instinctual empathizing with the person being “electrocuted.”)
In case you have forgotten the Milgram Experiment, it went like this: an unaware subject is brought into the lab, where he meets the “scientist” and a “student,” who are really in cahoots. The subject is told that he is going to assist with an experiment to see whether administering electric shocks to the “student” will make him learn faster. The “student” also tells the subject, in confidence, that he has a heart condition.
The real experiment is to see if the subject will shock the “student” to death at the “scientist’s” urging.
No actual shocks are administered, but the “student” is a good actor, making out that he is in terrible pain and then suddenly going silent, etc.
Before the experiment, Milgram polled various people, both students and “experts” in psychology, and pretty much everyone agreed that virtually no one would administer all of the shocks, even when pressured by the “scientist.”
In Milgram’s first set of experiments, 65 percent (26 of 40) of experiment participants administered the experiment’s final massive 450-volt shock,[1] though many were very uncomfortable doing so; at some point, every participant paused and questioned the experiment; some said they would refund the money they were paid for participating in the experiment. Throughout the experiment, subjects displayed varying degrees of tension and stress. Subjects were sweating, trembling, stuttering, biting their lips, groaning, digging their fingernails into their skin, and some were even having nervous laughing fits or seizures. (bold mine)
I’m skeptical about the seizures, but the rest sounds about right. Resisting one’s own instinctual desire to obey–or putting the desire to obey in conflict with one’s other desires–creates great emotional discomfort.
People love their dogs so much that it feels really dickish to point out that dogs aren’t actually humans and we don’t actually treat them like full family members. Maybe this is just the American difficulty with shades of gray, where such an argument is seen as the moral equivalent of eating puppies for breakfast, or maybe extreme dog affection is an instinctual mental trait of healthy people, and so only abnormal weirdos claim that it sounds irrational.
As we discussed yesterday, pet ownership is normal (in that the majority of Americans own pets,) and pet owners themselves are disproportionately married suburbanites with children. However, pet ownership is also somewhat exceptional, in that Americans–particularly American whites–appear globally unique in their high degree of affection for pets.
Incidentally, 76% of dog owners have bought Christmas presents for their dogs. (I’ve even done this.)
Why do people love dogs (and other pets) so much?
Wikipedia cites a couple of theories, e.g.:
Wilson’s (1984) biophilia hypothesis is based on the premise that our attachment to and interest in animals stems from the strong possibility that human survival was partly dependent on signals from animals in the environment indicating safety or threat. The biophilia hypothesis suggests that now, if we see animals at rest or in a peaceful state, this may signal to us safety, security and feelings of well-being which in turn may trigger a state where personal change and healing are possible.
Since I tend to feel overwhelmingly happy and joyful while walking in the woods, I understand where this theory comes from, but it doesn’t explain why suburban white parents like pets more than, say, single Chinese men, or why hunter-gatherers (or recently settled hunter-gatherers) aren’t the most avid pet-owners (you would think hunter-gatherers would be particularly in tune with the states of the animals around them!)
So I propose a different theory:
Pets are (mostly) toy versions of domestic animals.
Europeans–and Americans–have traditionally been engaged in small-scale farming and animal husbandry, raising chickens, pigs, cattle, horses, sheep, and occasionally goats, geese, turkeys, and ducks.
Dogs and cats held a special place on the farm. Dogs were an indispensable part of farm operations, both protecting the animals and helping to round them up, and they worked closely with the humans in farm management. Much has been written on the relationship between the shepherd and his sheep, but let us not overlook the relationship between the shepherd and his dog.
Cats also did their part, by eliminating the vermin that were attracted to the farmer’s grain.
These dogs and cats were still “working” animals rather than “pets” kept solely for their company, but they clearly enjoyed a special status in the farmer’s world: helpers rather than food.
For children, raising “pets” teaches valuable skills necessary for caring for larger animals–better to make your learning mistakes when the only one dependent on you is a hamster than when it’s a whole flock of sheep and your family’s entire livelihood.
Raising pets provides an additional benefit in creating the bond between a child and dog that will eventually transform into the working relationship between farmer and farm-dog.
Empathy has probably played an important role in animal domestication–the ability to understand the animal’s point of view and care about its well being probably helps a lot when trying to raise it from infancy to adulthood. People with higher levels of empathy may have been better at domesticating animals in the first place, and living in an economy dependent on animal husbandry may have also selected for people with high levels of empathy.
In other words, people who treated their dogs well have probably been more evolutionarily successful than people who didn’t, pushing us toward instinctually treating dogs like one of the family. (Though I still think that people who sell cancer treatments for cats and dogs are taking advantage of gullible pet owners and that actually treating an animal just like a human is a bad idea. I also find it distasteful to speak of adopted dogs finding their “forever homes,” a phrase lifted from human adoption.)
However, if you’ve ever interacted with humans, you’ve probably noticed by now that some would give their dog their right kidney, and some would set a dog on fire without blinking.
(I am reminded here of the passage in Philippe Bourgois’s In Search of Respect in which the anthropologist is shocked to discover that violent Nuyorican crack dealers think torturing animals is funny.)
I have been looking for a map showing the historical distribution of domesticated animals in different parts of the globe, but have so far failed. I’d be most grateful if anyone can find one. To speak very generally, Australia historically had no domesticated animals, South America had llamas, North America had dogs, African hunter-gatherers didn’t have any, African horticulturalists had a chicken-like bird (probably the guinea fowl), and then Europe/Asia/the Middle East/India/other Africans had a large variety of animals, like camels and yaks and horses and goats.
…a deletion variant of the ADRA2b gene. Carriers remember emotionally arousing images more vividly and for a longer time, and they also show more activation of the amygdala when viewing such images (Todd and Anderson, 2009; Todd et al., 2015). … Among the Shors, a Turkic people of Siberia, the incidence was 73%. Curiously, the incidence was higher in men (79%) than in women (69%). It may be that male non-carriers had a higher death rate, since the incidence increased with age (Mulerova et al., 2015). … The picture is still incomplete but the incidence of the ADRA2b deletion variant seems to range from a low of 10% in some sub-Saharan African groups to a high of 50-65% in some European groups and 55-75% in some East Asian groups. Given the high values for East Asians, I suspect this variant is not a marker for affective empathy per se but rather for empathy in general (cognitive and affective). [source]
The Shors are a small, formerly semi-nomadic group from Siberia. I haven’t found out much about them, but I bet they had dogs, like other Siberian groups.
Frost hypothesizes that extensive empathy developed as part of the suite of mental traits that made life possible in large communities of Mesolithic hunter-gatherers along the Baltic:
This weak kinship zone may have arisen in prehistory along the coasts of the North Sea and the Baltic, which were once home to a unique Mesolithic culture (Price, 1991). An abundance of marine resources enabled hunter-fisher-gatherers to achieve high population densities by congregating each year in large coastal agglomerations for fishing, sealing, and shellfish collecting. Population densities were comparable in fact to those of farming societies, but unlike the latter there was much “churning” because these agglomerations formed and reformed on a yearly basis. Kinship obligations would have been insufficient to resolve disputes peaceably, to manage shared resources, and to ensure respect for social rules. Initially, peer pressure was probably used to get people to see things from the other person’s perspective. Over time, however, the pressure of natural selection would have favored individuals who more readily felt this equivalence of perspectives, the result being a progressive hardwiring of compassion and shame and their gradual transformation into empathy and guilt (Frost, 2013a; Frost, 2013b).
Empathy and guilt are brutally effective ways to enforce social rules. If one disobeys these internal overseers, the result is self-punishment that passes through three stages: anguish, depression and, ultimately, suicidal ideation. [source]
Someone has been reading a lot of Dostoyevsky. But I’m wondering if the first ingredient is actually farming/animal husbandry.
To sum:
1. People with high levels of empathy may have had an easier time domesticating animals/raising domesticated animals, creating a feedback loop of increasing empathy in farming populations.
2. This empathetic connection was strongest with dogs and cats, who aren’t meat to be slaughtered but human partners.
3. Children assigned the task of raising dogs and cats bonded with their charges.
4. Modern “pets” are (living) toy versions of the working dogs and cats who once helped manage the farms.
Poll time!
1. Do you have a pet?
2. Do you think pets should be treated like family members/humans?
3. Would you shoot your pet for a million dollars?
A. Never!
B. Yes, but I would use the money to raise 100 abandoned animals out of suffering.
C. Yes.
D. That’s a terrible question! What kind of sick fuck makes up a question like that?
Much like religion and nationalism, I don’t really have the emotional impulses necessary to really get into the idea of a holiday dedicated to eating turkey. Maybe this is just my personal failing, or a side effect of not being a farmer, but either way here I am, grumbling under my breath about how I’d rather be getting stuff done than eat.
Nevertheless, I observe that other people seem to like holidays. They spend large amounts of money on them, decorate their houses, voluntarily travel to see relatives, and otherwise “get into the holiday mood.” While some of this seems to boil down to simple materialism, there does seem to be something more: people really do like their celebrations. I may not be able to hear the music, but I can still tell that people are dancing.
And if so many people are dancing, and they seem healthy and happy and well-adjusted, then perhaps dancing is a good thing.
The point of Thanksgiving, a made-up holiday, (though it does have its roots in real harvest celebrations,) is to celebrate the connection between family and nation. This is obvious enough, since Thanksgiving unifies “eating dinner with my family” with “founding myth of the United States.” We tell the story of the Pilgrims, not because they are everyone’s ancestors, but because they represent the symbolic founding of the nation. (My Jamestown ancestors actually got here first, but I guess Virginia was not in Lincoln’s good graces when he decided to make a holiday.)
In the founding mythos, the Pilgrims are brave, freedom-loving people who overcome tremendous odds to found a new nation, with the help of their new friends, the Indians.
Is the founding mythos true?
It doesn’t matter. Being “literally true” is not the point of a myth. The Iliad did not become one of the most popular books of all time because it provides a 100% accurate account of the Trojan war, but because it describes heroism, bravery, and conversely, cowardice. (“Hektor” has always been high on my names list.) Likewise, the vast majority of Christians do not take the Bible 100% literally (even the ones who claim they do.) Arguing about which day God created Eve misses the point of the creation story; arguing about whether the Exodus happened exactly as told misses the point the story held for a people in exile.
The story of Thanksgiving instructs us to work hard, protect liberty, and be friends with the Indians. It reminds us both of the Pilgrims’ utopian goal of founding the perfect Christian community, a shining city upon the hill, and of the value of religious tolerance. (Of course, the Puritans would probably not have been keen on religious tolerance or freedom of religion, given that they exiled Anne Hutchinson for talking too much about God.)
Most of us today probably aren’t descended from the Pilgrims, but the ritual creates a symbolic connection between them and us, for we are the heirs of the civilization they began. Likewise, each family is connected to the nation as a whole; without America, we wouldn’t be here, eating this turkey together.
Unless you don’t like turkey. In which case, have some pie.
But humans are not mere action-reaction systems; they have qualia, an inner experience of being.
One of my themes here is the idea that various psychological traits, like anxiety, guilt, depression, or disgust, might not be just random things we feel, but exist for evolutionary reasons. Each of these emotions, when experienced moderately, may have beneficial effects. Guilt (and its cousin, shame,) helps us maintain our social relationships with other people, aiding in the maintenance of large societies. Disgust protects us from disease and helps direct sexual interest at one’s spouse, rather than random people. Anxiety helps people pay attention to crucial, important details, and mild depression may help people concentrate, stay out of trouble, or–very speculatively–have helped our ancestors hibernate during the winter.
In excess, each of these traits is damaging, but a shortage of each trait may also be harmful.
I have commented before on the remarkable statistic that 25% of women are on anti-depressants, and if we exclude women over 60 (and below 20,) the number of women with an “anxiety disorder” jumps to over 30%.
The idea that a full quarter of us are actually mentally ill is simply staggering. I see three potential causes for the statistic:
1. Doctors prescribe anti-depressants willy-nilly to everyone who asks, whether they’re actually depressed or not;
2. Something about modern life is making people especially depressed and anxious;
3. Mental illnesses are side effects of common, beneficial conditions (similar to how sickle cell anemia is a side effect of protection from malaria.)
As you probably already know, the sickle cell mutation protects carriers–people with one copy–from malaria, while two copies cause sickle cell anemia. Imagine a population where 100% of people are sickle cell carriers–that is, they have one mutated gene and one regular gene. The next generation in this population will be roughly 25% people who have two regular genes (and so die of malaria,) 50% people who have one sickle cell gene and one regular gene (and so are protected,) and 25% people who have two sickle cell genes (and so die of sickle cell anemia.) (I’m sure this is a very simplified scenario.)
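If you want to check the arithmetic, here is a minimal sketch of that simplified carrier-times-carrier scenario (the allele labels “A” and “S” are just for illustration):

```python
from itertools import product

# Simplified scenario from above: both parents are carriers, with one regular
# allele ("A") and one sickle allele ("S"); each child gets one allele from
# each parent at random.
parent1 = ["A", "S"]
parent2 = ["A", "S"]

counts = {"AA": 0, "AS": 0, "SS": 0}
for allele1, allele2 in product(parent1, parent2):
    genotype = "".join(sorted(allele1 + allele2))
    counts[genotype] += 1

total = sum(counts.values())
for genotype, n in counts.items():
    print(genotype, n / total)
# AA 0.25  -> two regular genes, no protection from malaria
# AS 0.5   -> carriers, protected
# SS 0.25  -> two sickle genes, sickle cell anemia
```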
So I consider it technically possible for 25% of people to suffer a pathological genetic condition, but unlikely–malaria is a particularly ruthless killer compared to being too cheerful.
Skipping to the point, I think there’s a little of all three going on. Each of us probably has some kind of personality “set point” that is basically determined by some combination of genetics, environmental assaults, and childhood experiences. People deviate from their set points due to random stuff that happens in their lives, (job promotions, visits from friends, car accidents, etc.,) but the way they respond to adversity and the mood they tend to return to afterwards is largely determined by their “set point.” This is all a fancy way of saying that people have personalities.
The influence of random chance on these genetic/environmental factors suggests that there should be variation in people’s emotional set points–we should see that some people are more prone to anxiety, some less prone, and some of average anxiousness.
Please note that this is a statistical should, in the same sense that, “If people are exposed to asbestos, some of them should get cancer,” not a moral should, as in, “If someone gives you a gift, you should send a thank-you note.”
Natural variation in a trait does not automatically imply pathology, but being more anxious or depressive or guilt-ridden than others can be highly unpleasant. I see nothing wrong, a priori, with people doing things that make their lives more pleasant and manageable (and don’t hurt others); this is, after all, why I enjoy a cup of coffee every morning. If you are a better, happier, more productive person with medication (or without it,) then carry on; this post is not intended as a critique of anyone’s personal mental health management, nor a suggestion for how to take care of your mental health.
Our medical/psychological health system, however, operates on the assumption that medications are for pathologies only. There is no form to fill out that says, “Patient would like anti-anxiety drugs in order to live a fuller, more productive life.”
That said, all of these emotions are obviously responses to actual stuff that happens in real life, and if 25% of women are coming down with depression or anxiety disorders, I think we should critically examine whether anxiety and depression are really the disease we need to be treating, or the body’s responses to some external threat.
In a mixed group, women become quieter, less assertive, and more compliant. This deference is shown only to men and not to other women in the group. A related phenomenon is the sex gap in self-esteem: women tend to feel less self-esteem in all social settings. The gap begins at puberty and is greatest in the 15-18 age range (Hopcroft, 2009).
If more women enter the workforce–either because they think they ought to or because circumstances force them to–and the workforce triggers depression, then as the percent of women formally employed goes up, we should see a parallel rise in mental illness rates among women. Just as Adderall and Ritalin help little boys conform to the requirements of modern classrooms, Prozac and Lithium help women cope with the stress of employment.
As we discussed yesterday, fever is not a disease, but part of your body’s system for re-asserting homeostasis by killing disease microbes and making it more difficult for them to reproduce. Extreme fevers are an over-reaction and can kill you, but a normal fever below 104 degrees or so is merely unpleasant and should be allowed to do its work of making you better. Treating a normal fever (trying to lower it) interferes with the body’s ability to fight the disease and results in longer sicknesses.
Likewise, these sorts of emotions, while definitely unpleasant, may serve some real purpose.
We humans are social beings (and political animals.) We do not exist on our own; historically, loneliness was not merely unpleasant, but a death sentence. Humans everywhere live in communities and depend on each other for survival. Without refrigeration or modern storage methods, saving food was difficult. (Unless you were an Eskimo.) If you managed to kill a deer while on your own, chances are you couldn’t eat it all before it began to rot, and then your chances of killing another deer before you started getting seriously hungry were low. But if you share your deer with your tribesmates, none of the deer goes to waste, and if they share their deer with you, you are far less likely to go hungry.
If you end up alienated from the rest of your tribe, there’s a good chance you’ll die. It doesn’t matter if they were wrong and you were right; it doesn’t matter if they were jerks and you were the nicest person ever. If you can’t depend on them for food (and mates!) you’re dead. This is when your emotions kick in.
People complain a lot that emotions are irrational. Yes, they are. They’re probably supposed to be. There is nothing “logical” or “rational” about feeling bad because someone is mad at you over something they did wrong! And yet it happens. Not because it is logical, but because being part of the tribe is more important than who did what to whom. Your emotions exist to keep you alive, not to prove rightness or wrongness.
This is, of course, an oversimplification. Men and women have been subject to different evolutionary pressures, for example. But this is close enough for the purposes of the current conversation.
If modern people are coming down with mental illnesses at astonishing rates, then maybe there is something about modern life that is making people ill. If so, treating the symptoms may make life more bearable for people while they are subject to the disease, but still does not fundamentally address whatever it is that is making them sick in the first place.
It is my own opinion that modern life is pathological, not (in most cases,) people’s reactions to it. Modern life is pathological because it is new and therefore you aren’t adapted to it. Your ancestors have probably only lived in cities of millions of people for a few generations at most (chances are good that at least one of your great-grandparents was a farmer, if not all of them.) Naturescapes are calming and peaceful; cities are noisy, crowded, and full of pollution. There is a reason schizophrenia is more common in cities than on farms. This doesn’t mean that we should just throw out cities, but it does mean we should be thoughtful about them and their effects.
People seem to do best, emotionally, when they have the support of their kin, some degree of ethnic or national pride, and economic and physical security, and when they attend religious services and avoid crowded cities. (Here I am, an atheist, recommending church for people.) The knowledge that you are at peace with your tribe and that your tribe has your back seems almost entirely absent from most people’s modern lives; instead, people are increasingly pushed into environments where they have no tribe and most people they encounter in daily life have no connection to them. Indeed, tribalism and city living don’t seem to get along very well.
To return to healthy lives, we may need to re-think the details of modernity.
Politics
Philosophically and politically, I am a great believer in moderation and virtue as the ethical, conscious application of homeostatic systems to the self and to organizations that exist for the sake of humans. Please understand that this is not moderation in the conventional sense of “sometimes I like the Republicans and sometimes I like the Democrats,” but the self-moderation necessary for bodily homeostasis reflected at the social/organizational/national level.
For example, I have posted a bit on the dangers of mass immigration, but this is not a call to close the borders and allow no one in. Rather, I suspect that there is an optimal amount–and kind–of immigration that benefits a community (and this optimal quantity will depend on various features of the community itself, like size and resources.) Thus, each community should aim for its optimal level. But since virtually no one–certainly no one in a position of influence–advocates for zero immigration, I don’t devote much time to writing against it; it is only mass immigration that is getting pushed on us, and thus mass immigration that I respond to.
Similarly, there is probably an optimal level of communal genetic diversity. Too low, and inbreeding results. Too high, and fetuses miscarry due to incompatible genes. (Rh- mothers have difficulty carrying Rh+ fetuses, for example, because their immune systems identify the fetus’s blood as foreign and therefore attack it, killing the fetus.) As in agriculture, monocultures are at great risk of getting wiped out by disease; genetic heterogeneity helps ensure that some members of a population can survive a plague. Homogeneity helps people get along with their neighbors, but too much may lead to everyone thinking through problems in similar ways. New ideas and novel ways of attacking problems often come from people who are outliers in some way, including genetics.
There is a lot of talk ’round these parts that basically blames all the crimes of modern civilization on females. Obviously I have a certain bias against such arguments–I of course prefer to believe that women are superbly competent at all things, though I do not wish to stake the functioning of civilization on that assumption. If women are good at math, they will do math; if they are good at leading, they will lead. A society that tries to force women into professions they are not inclined to is out of kilter; likewise, so is a society where women are forced out of fields they are good at. Ultimately, I care about my doctor’s competence, not their gender.
In a properly balanced society, male and female personalities complement each other, contributing to the group’s long-term survival.
Women are not accidents of nature; they are as they are because their personalities succeeded where women with different personalities did not. Women have a strong urge to be compassionate and nurturing toward others, maintain social relations, and care for those in need of help. These instincts have, for thousands of years, helped keep their families alive.
When the masculine element becomes too strong, society becomes too aggressive. Crime goes up; unwinnable wars are waged; people are left to die. When the feminine element becomes too strong, society becomes too passive; invasions go unresisted; welfare spending becomes unsustainable. Society can’t solve this problem by continuing to give both sides everything they want, (this is likely to be economically disastrous,) but must actually find a way to direct them and curb their excesses.
I remember an article on the now-defunct neuropolitics (now that I think of it, the Wayback Machine probably has it somewhere,) on an experiment where groups with varying numbers of “liberals” and “conservatives” had to work together to accomplish tasks. The “conservatives” tended to solve their problems by creating hierarchies that organized their labor, with the leader(s) giving everyone specific tasks. The “liberals” solved their problems by incorporating new members until they had enough people to solve specific tasks. The groups that performed best, overall, were those that had a mix of ideologies, allowing them to both make hierarchical structures to organize their labor and incorporate new members when needed. I don’t remember much else of the article, nor did I read the original study, so I don’t know what exactly the tasks were, or how reliable this study really was, but the basic idea of it is appealing: organize when necessary; form alliances when necessary. A good leader recognizes the skills of different people in their group and uses their authority to direct the best use of these skills.
Our current society greatly lacks in this kind of coherent, organizing direction. Most communities have very little in the way of leadership–moral, spiritual, philosophical, or material–and our society seems constantly intent on attacking and tearing down any kind of hierarchies, even those based on pure skill and competence. Likewise, much of what passes for “leadership” is people demanding that you do what they say, not demonstrating any kind of competence. But when we do find competent leaders, we would do well to let them lead.
Time Preference isn’t sexy and exciting, like anything related to, well, sex. It isn’t controversial like IQ and gender. In fact, most of the ink spilled on the subject isn’t even found in evolutionary or evolutionary psychology texts, but over in economics papers about things like interest rates that no one but economists would want to read.
So why do I think Time Preference is so important?
Because I think Low Time Preference is the true root of high intelligence.
First, what is Time Preference?
Time Preference (aka future time orientation, time discounting, delay discounting, temporal discounting,) is the degree to which you value having a particular item today versus having it tomorrow. “High time preference” means you want things right now, whereas “low time preference” means you’re willing to wait.
A relatively famous test of Time Preference is to offer a child a cookie right now, but tell them they can have two cookies if they wait 10 minutes. Some children take the cookie right away, some wait the full ten minutes, and some try to wait but succumb to the cookie about halfway through.
Obviously, many factors can influence your Time Preference–if you haven’t eaten in several days, for example, you’ll probably not only eat the cookie right away, but also start punching me until I give you the second cookie. If you don’t like cookies, you won’t have any trouble waiting for the second one, but you won’t have much use for it, either. Etc. But with all of these things held equal, your basic inclination toward high or low time preference is probably biological–and by “biological,” I mean, “mostly genetic.”
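For the quantitatively inclined: economists usually model this with a discounting function. A common one is hyperbolic discounting, where a reward of size A delayed by D is worth about A/(1 + kD) right now, and k measures how steeply you discount the future (higher k = higher time preference). A minimal sketch of the cookie choice, with made-up k values:

```python
def discounted_value(amount, delay, k):
    # Hyperbolic discounting: present value of `amount` received after `delay`.
    # A larger k means steeper discounting, i.e. higher time preference.
    return amount / (1 + k * delay)

# One cookie now vs. two cookies in ten minutes, for two hypothetical kids.
for k in (0.05, 0.5):  # made-up discount rates per minute
    value_now = discounted_value(1, 0, k)
    value_later = discounted_value(2, 10, k)
    choice = "waits" if value_later > value_now else "takes the cookie now"
    print(f"k={k}: 1 now = {value_now:.2f}, 2 later = {value_later:.2f} -> {choice}")
# k=0.05: 2 later is worth ~1.33 > 1.00 -> waits
# k=0.5:  2 later is worth ~0.33 < 1.00 -> takes the cookie now
```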
The scientists train rats to touch pictures with their noses in return for sugar cubes. Picture A gives them one cube right away, while picture B gives them more cubes after a delay. If the delay is too long or the reward too small, the rats just take the one cube right away. But there’s a sweet spot–apparently 4 cubes after a short wait—where the rats will figure it’s worth their while to tap picture B instead of picture A.
But if you snip the connection between the rats’ hippocampi and nucleus accumbenses, suddenly they lose all ability to wait for sugar cubes and just eat their sugar cubes right now, like a pack of golden retrievers in a room full of squeaky toys. They become completely unable to wait for the better payout of four sugar cubes, no matter how much they might want to.
So we know that this connection between the hippocampus and the nucleus accumbens is vitally important to your Time Preference, though I don’t know what other modifications, such as low hippocampal volume or low nucleus accumbens volume, would do.
So what do the hippocampus and nucleus accumbens do?
According to the Wikipedia, the hippocampus plays an important part in inhibition, memory, and spatial orientation. People with damaged hippocampi become amnesiacs, unable to form new memories. There is a pretty direct relationship between hippocampus size and memory, as documented primarily in old people:
“There is, however, a reliable relationship between the size of the hippocampus and memory performance — meaning that not all elderly people show hippocampal shrinkage, but those who do tend to perform less well on some memory tasks.[71] There are also reports that memory tasks tend to produce less hippocampal activation in elderly than in young subjects.[71] Furthermore, a randomized-control study published in 2011 found that aerobic exercise could increase the size of the hippocampus in adults aged 55 to 80 and also improve spatial memory.” (wikipedia)
Amnesiacs (and Alzheimer’s patients) also get lost a lot, which seems like a perfectly natural side effect of not being able to remember where you are, except that rat experiments show something even more interesting: specific cells that light up as the rats move around, encoding data about where they are.
“Neural activity sampled from 30 to 40 randomly chosen place cells carries enough information to allow a rat’s location to be reconstructed with high confidence.” (wikipedia)
“Spatial firing patterns of 8 place cells recorded from the CA1 layer of a rat. The rat ran back and forth along an elevated track, stopping at each end to eat a small food reward. Dots indicate positions where action potentials were recorded, with color indicating which neuron emitted that action potential.” (from Wikipedia)
According to Wikipedia, the Inhibition function theory is a little older, but seems like a perfectly reasonable theory to me.
“[Inhibition function theory] derived much of its justification from two observations: first, that animals with hippocampal damage tend to be hyperactive; second, that animals with hippocampal damage often have difficulty learning to inhibit responses that they have previously been taught, especially if the response requires remaining quiet as in a passive avoidance test.”
This is, of course, exactly what the scientists found when they separated the rats’ hippocampi from their nucleus accumbenses–they lost all ability to inhibit their impulses in order to delay gratification, even for a better payout.
In other words, the hippocampus lets you learn, process the movement of objects through space (spatial reasoning), and helps you inhibit your impulses–that is, it is directly involved in IQ and Time Preference.
Dopaminergic input from the VTA modulate the activity of neurons within the nucleus accumbens. These neurons are activated directly or indirectly by euphoriant drugs (e.g., amphetamine, opiates, etc.) and by participating in rewarding experiences (e.g., sex, music, exercise, etc.).[11][12] …
The shell of the nucleus accumbens is involved in the cognitive processing of motivational salience (wanting) as well as reward perception and positive reinforcement effects.[6] Particularly important are the effects of drug and naturally rewarding stimuli on the NAc shell because these effects are related to addiction.[6] Addictive drugs have a larger effect on dopamine release in the shell than in the core.[6] The specific subset of ventral tegmental area projection neurons that synapse onto the D1-type medium spiny neurons in the shell are responsible for the immediate perception of the rewarding property of a stimulus (e.g., drug reward).[3][4] …
The nucleus accumbens core is involved in the cognitive processing of motor function related to reward and reinforcement.[6] Specifically, the core encodes new motor programs which facilitate the acquisition of a given reward in the future.[6]
So it sounds to me like the point of the nucleus accumbens is to learn “That was awesome! Let’s do it again!” or “That was bad! Let’s not do it again!”
Together, the nucleus accumbens + hippocampus can learn “4 sugar cubes in a few seconds is way better than 1 sugar cube right now.” Apart, the nucleus accumbens just says, “Sugar cubes! Sugar cubes! Sugar cubes!” and jams the lever that says “Sugar cube right now!” and there is nothing the hippocampus can do about it.
What distinguishes humans from all other animals? Our big brains, intellects, or impressive vocabularies?
It is our ability to acquire new knowledge and use it to plan and build complex, multi-generational societies.
Ants and bees live in complex societies, but they do not plan them. Monkeys, dolphins, squirrels, and even rats can plan for the future, but only humans plan and build cities.
Even the hunter-gatherer must plan for the future; a small tendril only a few inches high is noted during the wet season, then returned to in the dry, when it is little more than a withered stem, and the water-storing root beneath it harvested. The farmer facing winter stores up grain and wood; the city engineer plans a water and sewer system large enough to handle the next hundred years’ projected growth.
All of these activities require the interaction between the hippocampus and nucleus accumbens. The nucleus accumbens tells us that water is good, grain is tasty, fire is warm, and that clean drinking water and flushable toilets are awesome. The hippocampus reminds us that the dry season is coming, and so we should save–and remember–that root until we need it. It reminds us that we will be cold and hungry in winter if we don’t save our grain and spend hours and hours chopping wood right now. It reminds us that not only is it good to organize the city so that everyone can have clean drinking water and flushable toilets right now, but that we should also make sure the system will keep working even as new people enter the city over time.
Disconnect these two, and your ability to plan goes down the drain. You eat all of your roots now, devour your seed corn, refuse to chop wood, and say, well, yes, running water would be nice, but that would require so much planning.
As I have mentioned before, I think European IQ (and probably that of a few other groups whose histories I’m just not familiar enough with to comment on) increased quite a bit in the past thousand years or so, and not just because the Catholic Church banned cousin marriage. During this time, manorialism became a big deal throughout Western Europe, and the people who exhibited good impulse control, worked hard, delayed gratification, and were able to accurately calculate the long-term effects of their actions tended to succeed (that is, have lots of children) and pass on their clever traits to their children. I suspect that selective pressure to “be a good manorial employee” was particularly strong in Germany (and possibly Japan, now that I think about it,) resulting in the Germanic rigidity that makes them such good engineers.
Nothing in the manorial environment directly selected for engineering ability, higher math, large vocabularies, or really anything that we mean when we normally talk about IQ. But I do expect manorial life to select for those who could control their impulses and plan for the future, resulting in a run-away effect of increasingly clever people constructing increasingly complex societies in which people had to be increasingly good at dealing with complexity and planning to survive.
Ultimately, I see pure mathematical ability as a side effect of being able to accurately predict the effects of one’s actions and plan for the future (eg, “It will be an extra long winter, so I will need extra bushels of corn,”) and the ability to plan for the future as a side effect of being able to accurately represent the path of objects through space and remember lessons one has learned. All of these things, ultimately, are the same operations, just oriented differently through the space-time continuum.
Since your brain is, of course, built from the same DNA code as the rest of you, we would expect brain functions to have some amount of genetic heritability, which is exactly what we find:
“A meta-analysis of twin, family and adoption studies was conducted to estimate the magnitude of genetic and environmental influences on impulsivity. The best fitting model for 41 key studies (58 independent samples from 14 month old infants to adults; N=27,147) included equal proportions of variance due to genetic (0.50) and non-shared environmental (0.50) influences, with genetic effects being both additive (0.38) and non-additive (0.12). Shared environmental effects were unimportant in explaining individual differences in impulsivity. Age, sex, and study design (twin vs. adoption) were all significant moderators of the magnitude of genetic and environmental influences on impulsivity. The relative contribution of genetic effects (broad sense heritability) and unique environmental effects were also found to be important throughout development from childhood to adulthood. Total genetic effects were found to be important for all ages, but appeared to be strongest in children. Analyses also demonstrated that genetic effects appeared to be stronger in males than in females.”
“Shared environmental effects” in a study like this means “the environment you and your siblings grew up in, like your household and school.” In this case, shared effects were unimportant–that means that parenting had no effect on the impulsivity of adopted children raised together in the same household. Non-shared environmental influences are basically random–you bumped your head as a kid, your mom drank during pregnancy, you were really hungry or pissed off during the test, etc., and maybe even cultural norms.
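To make the quoted variance decomposition concrete, here is a sketch of how those proportions fit together, using only the numbers reported in the abstract above (broad-sense heritability is just the additive and non-additive genetic pieces combined):

```python
# Variance components for impulsivity, as reported in the quoted meta-analysis
# (proportions of total variance).
additive_genetic       = 0.38
non_additive_genetic   = 0.12
shared_environment     = 0.00   # "unimportant" in the best-fitting model
non_shared_environment = 0.50

# Broad-sense heritability = all genetic effects, additive + non-additive.
broad_sense_heritability = additive_genetic + non_additive_genetic

print(broad_sense_heritability)   # 0.50 -> "part genetic"
print(non_shared_environment)     # 0.50 -> "part random luck"
print(broad_sense_heritability + shared_environment
      + non_shared_environment)   # 1.00 -> the components account for all the variance
```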
So your ability to plan for the future appears to be part genetic, and part random luck.
Important backstory: once upon a time, I made some offhand comments about mental health/psychiatric drugs that accidentally influenced someone else to go off their medication, which began a downward spiral that ended with them in the hospital after attempting suicide. Several years later, you could still see the words “I suck” scarred into their skin.
There were obviously some other nasty things that had nothing to do with me before the attempt, but regardless, there’s an important lesson: don’t say stupid ass things about mental health shit you know nothing about.
Also, don’t take mental health advice from people who don’t know what they’re talking about.
In my entirely inadequate defense, I was young and very dumb. David Walker is neither–and he is being published by irresponsible people who ought to know better.
To be clear: I am not a psychiatrist. I’m a dumb person on the internet with opinions. I am going to do my very damn best to counteract even dumber ideas, but for god’s sakes, if you have mental health issues, consult with someone with actual expertise in the field.
Also, you know few things bug me like watching science and logic be abused. So let’s get down to business:
This is one of those articles where SJW-logic combines with sketchy research (of the sort that I suspect originated with funding from guys trying to prove that all mental illnesses are caused by Galactic Overlord Xenu) to produce a not very satisfying article. I suppose it is petty to complain that the piece didn’t flow well, but still, it irked.
Basically, to sum: The Indian Health Service is evil because it uses standard psychiatry language and treatment–the exact same language and treatment as everyone else in the country is getting–instead of filling its manuals with a bunch of social-justice buzzwords like “colonization” and “historical trauma”. The article does not tell us how, exactly, inclusion of these buzzwords is supposed to actually change the practice of psychiatry–part of what made the piece frustrating on a technical level.
The author then makes a bunch of absolutist claims about standard depression treatment that range from the obviously false to matters of real debate in the field. Very few of his claims are based on what I’d call “settled science”–and if you’re going to make absolutist claims about medically related things, please, try to only say things that are actually settled.
The crux of Walker’s argument is a claim that anti-depressants actually kill people and decrease libido, so therefore the IHS is committing genocide by murdering Indians and preventing the births of new ones.
Ugh, when I put it like that, it sounds so obviously dumb.
Some actual quotes:
“In the last 40 years, certain English words and phrases have become more acceptable to indigenous scholars, thought leaders, and elders for describing shared Native experiences. They include genocide, cultural destruction, colonization, forced assimilation, loss of language, boarding school, termination, historical trauma and more general terms, such as racism, poverty, life expectancy, and educational barriers. There are many more.”
Historical trauma is horribly sad, of course, but as a cause for depression, I suspect it ranks pretty low. If historical trauma suffered by one’s ancestors results in continued difficulties several generations down the line, then the descendants of all traumatized groups ought to show similar effects. Most of Europe got pretty traumatized during WWII, but most of Europe seems to have recovered. Even the Jews, who practically invented modern psychiatry, use standard psychiatric models for talking about their depression without invoking the Holocaust. (Probably because depression rates are pretty low in Israel.)
But if you want to pursue this line of argument, you would need to show first that Indians are being diagnosed with depression (or other mental disorders) at a higher rate than the rest of the population, and then you would want to show that a large % of the excess are actually suffering some form of long-term effects of historical trauma. Third, you’d want to show that some alternative method of treatment is more effective than the current method.
To be fair, I am sure there are many ways that psychiatry sucks or could be improved. I just prefer good arguments on the subject.
“…the agency’s behavioral health manual mentions psychiatrist and psychiatric 23 times, therapy 18 times, pharmacotherapy, medication, drugs, and prescription 16 times, and the word treatment, a whopping 89 times. But it only uses the word violence once, and you won’t find a single mention of genocide, cultural destruction, colonization, historical trauma, etc.—nor even racism, poverty, life expectancy or educational barriers.”
It’s absolutely shocking that a government-issued psychiatry manual uses standard terms used in the psychiatry field like “medication” and “psychiatrist,” but doesn’t talk about particular left-wing political theories. It’s almost like the gov’t is trying to be responsible and follow accepted practice in the field or something. Of course, to SJWs, even medical care should be sacrificed before the altar of advancing the buzz-word agenda.
“This federal agency doesn’t acknowledge the reality of oppression within the lives of Native people.”
and… so? I know it sucks to deal with people who don’t acknowledge what you’re going through. My own approach to such people is to avoid them. If you don’t like what the IHS has to offer, then offer something better. Start your own organization offering support to people suffering from historical trauma. If your system is superior, you’ll not only benefit thousands (perhaps millions!) of people, but probably become highly respected and well-off in the process. Even if you, personally, don’t have the resources to start such a project, surely someone does.
If you can’t do that, you can at least avoid the IHS if you don’t like them. No one is forcing you to go to them.
BTW, in case you are wondering what the IHS is, here’s what Wikipedia has to say about them:
“The Indian Health Service (IHS) is an operating division (OPDIV) within the U.S. Department of Health and Human Services (HHS). IHS is responsible for providing medical and public health services to members of federally recognized Tribes and Alaska Natives. … its goal is to raise their health status to the highest possible level. … IHS currently provides health services to approximately 1.8 million of the 3.3 million American Indians and Alaska Natives who belong to more than 557 federally recognized tribes in 35 states. The agency’s annual budget is about $4.3 billion (as of December 2011).”
Sounds nefarious. So who runs this evil agency of health?
“The IHS employs approximately 2,700 nurses, 900 physicians, 400 engineers, 500 pharmacists, and 300 dentists, as well as other health professionals totaling more than 15,000 in all. The Indian Health Service is one of two federal agencies mandated to use Indian Preference in hiring. This law requires the agency to give preference hiring to qualified Indian applicants before considering non-Indian candidates for positions. … The Indian Health Service is headed by Dr. Yvette Roubideaux, M.D., M.P.H., a member of the Rosebud Sioux in South Dakota.”
So… the IHS, run by Indians, is trying to genocide other Indians by giving them mental health care?
And maybe I’m missing something, but don’t you think Dr. Roubideaux has some idea about the historical oppression of her own people?
Then we get into some anti-Pfizer/Zoloft business:
“For about a decade, IHS has set as one of its goals the detection of Native depression. [How evil of them!] This has been done by seeking to widen use of the Patient Health Questionnaire-9 (PHQ-9), which asks patients to describe to what degree they feel discouraged, downhearted, tired, low appetite, unable to sleep, slow-moving, easily distracted or as though life is no longer worth living.
The PHQ-9 was developed in the 1990s for drug behemoth Pfizer Corporation by prominent psychiatrist and contract researcher Robert Spitzer and several others. Although it owns the copyright, Pfizer offers the PHQ-9 for free use by primary health care providers. Why so generous? Perhaps because Pfizer is a top manufacturer of psychiatric medications, including its flagship antidepressant Zoloft® which earned the company as much as $2.9 billion annually before it went generic in 2006.”
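For context on what a “positive PHQ-9 screen” actually involves: the PHQ-9 is just nine self-report items, each scored 0–3, summed into a 0–27 total, with conventional cutoffs around 5, 10, 15, and 20 for mild, moderate, moderately severe, and severe depression. A minimal scoring sketch (the patient’s answers here are made up):

```python
def phq9_score(item_scores):
    # Each of the nine items is answered 0-3 (0 = not at all ... 3 = nearly
    # every day); the total therefore runs from 0 to 27.
    assert len(item_scores) == 9 and all(0 <= s <= 3 for s in item_scores)
    total = sum(item_scores)
    if total >= 20:
        band = "severe"
    elif total >= 15:
        band = "moderately severe"
    elif total >= 10:
        band = "moderate"
    elif total >= 5:
        band = "mild"
    else:
        band = "minimal"
    return total, band

# Hypothetical patient answering the nine items:
print(phq9_score([2, 1, 2, 1, 1, 0, 1, 1, 0]))  # (9, 'mild')
```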
I agree that it is reasonable to be skeptical of companies trying to sell you things, but the mere fact that a company is selling a product does not automatically render it evil. For example, an umbrella company makes money if you buy umbrellas, but that doesn’t make it evil. Pfizer wants to promote its product, but also wants to make sure it gets prescribed properly.
” Even with the discovery that the drug can increase the risk of birth defects, 41 million prescriptions for Zoloft® were filled in 2013.”
Probably to people who weren’t pregnant.
“The DSM III-R created 110 new psychiatric labels, a number that had climbed by another 100 more by the time I started working at an IHS clinic in 2000.
Around that time, Pfizer, like many other big pharmaceutical corporations, was pouring millions of dollars into lavish marketing seminars disguised as “continuing education” on the uses of psychiatric medication for physicians and nurses with no mental health training.
… After this event, several primary care colleagues began touting their new expertise in mental health, and I was regularly advised that psychiatric medications were (obviously) the new “treatment of choice.” ”
Seriously, he’s claiming that psychiatric medications were the “new” “treatment of choice” in the year 2000? Zoloft was introduced in 1991. Prozac revolutionized the treatment of depression way back in 1987. Walker’s off by over a decade.
Now, as Scott Alexander says, beware the man of one study: you can visit Prozac and Zoloft’s Wikipedia pages yourself and read the debate about effectiveness.
Long story short, as I understand it: psychiatric medication is actually way cheaper than psychological therapy. If your primary care doctor can prescribe you Zoloft, then you can skip paying to see a psychiatrist altogether.
Back in the day, before we had much in the way of medication for anything, the preferred method for helping people cope with their problems was telling them that they secretly wanted to fuck their mothers. This sounds dumb, but it beats the shit out of locking up mentally ill people in asylums where they tended to die hideously. Unfortunately, talking to people about their problems doesn’t seem to have worked all that well, though you could bill a ton for a half-hour session every week for forty years straight or until the patient ran out of money.
Modern anti-depressant medications appear to actually work for people with moderate to severe depression, though last time I checked, medication combined with therapy/support had the best outcomes–if anything, I suspect a lot of people could use a lot more support in their lives.
I should clarify: when I say “work,” I don’t mean they cure the depression. This has not been my personal observation of the depressed people I know, though maybe they do for some people. What they do seem to do is lessen the severity of the depression, allowing the depressed person to function.
” Since those days, affixing the depression label to Native experience has become big business. IHS depends a great deal upon this activity—follow-up “medication management” encounters allow the agency to pull considerable extra revenue from Medicaid. One part of the federal government supplements funding for the other. That’s one reason it might be in the best interest of IHS to diagnose and treat depression, rather than acknowledge the emotional and behavioral difficulties resulting from chronic, intergenerational oppression.”
It’s totally awful of the US gov’t to give free medication and health care to people. Medically responsible follow-up to make sure the patients are responding properly to their medication and not having awful side effects is especially evil. The government should totally cut that out. From now on, let’s cancel health services for the Native Peoples. That will totally end oppression.
Also, anyone who has ever paid an ounce of attention to anything the government does knows that expanding the IHS’s mandate to acknowledge the results of oppression would increase their funding, not decrease it.
Forgive me if it sounds a bit like Walker is actually trying to increase his pay.
“The most recent U.S. Public Health Service practice guidelines, which IHS primary care providers are required to use, states that “depression is a medical illness,” and in a nod to Big Pharma suppliers like Pfizer, serotonin-correcting medications (SSRIs) like Zoloft® “are frequently recommended as first-line antidepressant treatment options.” ”
My god, they use completely standard terminology and make factual statements about their field! Just like, IDK, all other mental healthcare providers in the country and throughout most of the developed world.
“This means IHS considers Native patients with a positive PHQ-9 screen to be mentally ill with depression.”
Dude, this means that patients of EVERY RACE with a positive PHQ-9 are considered mentally ill with depression. Seriously, it’s not like Pfizer issues a separate screening guide for different races. If I visit a shrink, I’m going to get the exact same questionnaires as you are.
Also, yes, depression is considered a mental illness, but Walker knows as well as I do that there’s a big difference between mentally ill with depression and, say, mentally ill with untreated schizophrenia.
” instance, the biomedical theory IHS is still promoting is obsolete. After more than 50 years of research, there’s no valid Western science to back up this theory of depression (or any other psychiatric disorder besides dementia and intoxication). There’s no chemical imbalance to correct.”
Slate Star Codex did a very long and thorough takedown of this particular claim: simply put, Walker is full of shit and should be ashamed of himself. The “chemical imbalance” model of depression, while an oversimplification, is actually pretty darn accurate, mostly because your brain is full of chemicals. As Scott Alexander points out:
“And this starts to get into the next important point I want to bring up, which is chemical imbalance is a really broad idea.
Like, some of these articles seem to want to contrast the “discredited” chemical imbalance theory with up-and-coming “more sophisticated” theories based on hippocampal neurogenesis and neuroinflammation. Well, I have bad news for you. Hippocampal neurogenesis is heavily regulated by brain-derived neutrophic factor, a chemical. Neuroinflammation is mediated by cytokines. Which are also chemicals. Do you think depression is caused by stress? The stress hormone cortisol is…a chemical. Do you think it’s entirely genetic? Genes code for proteins – chemicals again. Do you think it’s caused by poor diet? What exactly do you think food is made of?”
One of the most important things about the “chemical imbalance model” is that it helps the patient (again quoting Scott):
” People come in with depression, and they think it means they’re lazy, or they don’t have enough willpower, or they’re bad people. Or else they don’t think it, but their families do: why can’t she just pull herself up with her own bootstraps, make a bit of an effort? Or: we were good parents, we did everything right, why is he still doing this? Doesn’t he love us?
And I could say: “Well, it’s complicated, but basically in people who are genetically predisposed, some sort of precipitating factor, which can be anything from a disruption in circadian rhythm to a stressful event that increases levels of cortisol to anything that activates the immune system into a pro-inflammatory mode, is going to trigger a bunch of different changes along metabolic pathways that shifts all of them into a different attractor state. This can involve the release of cytokines which cause neuroinflammation which shifts the balance between kynurinins and serotonin in the tryptophan pathway, or a decrease in secretion of brain-derived neutrotrophic factor which inhibits hippocampal neurogenesis, and for some reason all of this also seems to elevate serotonin in the raphe nuclei but decrease it in the hippocampus, and probably other monoamines like dopamine and norepinephrine are involved as well, and of course we can’t forget the hypothalamopituitaryadrenocortical axis, although for all I know this is all total bunk and the real culprit is some other system that has downstream effects on all of these or just…”
Or I could say: “Fuck you, it’s a chemical imbalance.””
I’m going to quote Scott a little more:
“I’ve previously said we use talk of disease and biology to distinguish between things we can expect to respond to rational choice and social incentives and things that don’t. If I’m lying in bed because I’m sleepy, then yelling at me to get up will solve the problem, so we call sleepiness a natural state. If I’m lying in bed because I’m paralyzed, then yelling at me to get up won’t change anything, so we call paralysis a disease state. Talk of biology tells people to shut off their normal intuitive ways of modeling the world. Intuitively, if my son is refusing to go to work, it means I didn’t raise him very well and he doesn’t love me enough to help support the family. If I say “depression is a chemical imbalance”, well, that means that the problem is some sort of complicated science thing and I should stop using my “mirror neurons” and my social skills module to figure out where I went wrong or where he went wrong. …
“What “chemical imbalance” does for depression is try to force it down to this lower level, tell people to stop trying to use rational and emotional explanations for why their friend or family member is acting this way. It’s not a claim that nothing caused the chemical imbalance – maybe a recent breakup did – but if you try to use your normal social intuitions to determine why your friend or family member is behaving the way they are after the breakup, you’re going to get screwy results. …
“So this is my answer to the accusation that psychiatry erred in promoting the idea of a “chemical imbalance”. The idea that depression is a drop-dead simple serotonin deficiency was never taken seriously by mainstream psychiatry. The idea that depression was a complicated pattern of derangement in several different brain chemicals that may well be interacting with or downstream from other causes has always been taken seriously, and continues to be pretty plausible. Whatever depression is, it’s very likely it will involve chemicals in some way, and it’s useful to emphasize that fact in order to convince people to take depression seriously as something that is beyond the intuitively-modeled “free will” of the people suffering it. “Chemical imbalance” is probably no longer the best phrase for that because of the baggage it’s taken on, but the best phrase will probably be one that captures a lot of the same idea.”
Back to the article.
Walker states, ” Even psychiatrist Ronald Pies, editor-in-chief emeritus of Psychiatric Times, admitted “the ‘chemical imbalance’ notion was always a kind of urban legend.” ”
Oh, look, Dr. Pies was kind enough to actually comment on the article. You can scroll to the bottom to read his evisceration of Walker’s points–” …First, while I have indeed called the “chemical imbalance” explanation of mood disorders an “urban legend”—it was never a real theory propounded by well-informed psychiatrists—this in no way means that antidepressants are ineffective, harmful, or no better than “sugar pills.” The precise mechanism of action of antidepressants is not relevant to how effective they are, when the patient is properly diagnosed and carefully monitored. …
” Even Kirsch’s data (which have been roundly criticized if not discredited) found that antidepressants were more effective than the placebo condition for severe major depression. In a re-analysis of the United States Food and Drug Administration database studies previously analyzed by Kirsch et al, Vöhringer and Ghaemi concluded that antidepressant benefit is seen not only in severe depression but also in moderate (though not mild) depression. …
” While there is no clear evidence that antidepressants significantly reduce suicide rates, neither is there convincing evidence that they increase suicide rates.”
Here’s my own suspicion: depressed people on anti-depressants have highs and lows, just like everyone else, but because their medication can’t completely 100% cure them, sooner or later they end up feeling pretty damn shitty during a low point and start thinking about suicide or actually try it.
However, Pies notes that there are plenty of studies that have found that anti-depressants reduce a person’s overall risk of suicide.
In other words, Walker is, at best, completely misrepresenting the science to make his particular side sound like the established wisdom in the field when he is, in fact, on the minority side. That doesn’t guarantee that he’s wrong–it just means he is a liar.
And you know what I think about liars.
And you can probably imagine what I think about liars who lie in ways that might endanger the mental health of other people and cause them to commit suicide.
But wait, he keeps going:
“In an astonishing twist, researchers working with the World Health Organization (WHO) concluded that building more mental health services is a major factor in increasing the suicide rate. This finding may feel implausible, but it’s been repeated several times across large studies. WHO first studied suicide in relation to mental health systems in 100 countries in 2004, and then did so again in 2010, concluding that:
“[S]uicide rates… were increased in countries with mental health legislation, there was a significant positive correlation between suicide rates, and the percentage of the total health budget spent on mental health; and… suicide rates… were higher in countries with greater provision of mental health services, including the number of psychiatric beds, psychiatrists and psychiatric nurses, and the availability of training in mental health for primary care professionals.””
Do you know why I’ve been referring to Walker as “Walker” and not “Dr. Walker,” despite his apparent PhD? It’s because anyone who does not understand the difference between correlation and causation does not deserve a doctorate–or even a high school diploma–of any sort. Maybe people spend more on mental health because of suicides?
Oh, look, here’s the map he uses to support his claim:
Look at all those high-mental healthcare spending African countries!
I don’t know about you, but it looks to me like the former USSR, India/Bhutan/Nepal, Sub-Saharan Africa, Guyana, and Japan & the Koreas have the highest suicide rates in the world. Among these countries, all but Japan and S. Korea are either extremely poor and probably have little to no public spending on mental healthcare, or are former Soviet countries that are both less-developed than their lower-suicide brothers to the West and whatever is going on in them is probably related to them all being former Soviet countries, rather than their fabulous mental healthcare funding.
In other words, this map shows the opposite of what Walker claims it does.
Again, this doesn’t mean he’s necessarily wrong. It just means that the data on the subject is mixed and does not clearly support his case in the manner he claims.
” Despite what’s known about their significant limitations and scientific groundlessness, antidepressants are still valued by some people for creating “emotional numbness,” according to psychiatric researcher David Healy.”
So they don’t have any effects, but people keep using them for their… effects? Which is it? Do they work or not work?
And emotional numbness is a damn sight better than wanting to kill yourself. That Walker does not recognize this shows just how disconnected he is from the realities of life for many people struggling with depression.
“The side effect of antidepressants, however, in decreasing sexual energy (libido) is much stronger than this numbing effect—sexual disinterest or difficulty becoming aroused or achieving orgasm occurs in as many as 60 percent of consumers.”
Which, again, is still better than wanting to kill yourself. I hear death really puts a dent in your sex life.
However, I will note that this is a real side effect, and if you are taking anti-depressants and really can’t stand the mood kill (pardon the pun,) talk to your doctor, because there’s always the possibility that a different medication will treat your depression without affecting your libido.
“A formal report on IHS internal “Suicide Surveillance” data issued by Great Lakes Inter-Tribal Epidemiology Center states the suicide rate for all U.S. adults currently hovers at 10 for every 100,000 people, while for the Native patients IHS tracked, the rate was 17 per 100,000. This rate varied widely across the regions IHS serves—in California it was 5.5, while in Alaska, 38.5.”
Interesting statistics. I’m guessing the difference between Alaska and California holds true for whites, too–I suspect it’s the long, cold, dark winters.
“In 2013, the highest U.S. suicide rate (14.2) was among Whites and the second highest rate (11.7) was among American Indians and Alaska Natives (Figure 5). Much lower and roughly similar rates were found among Asians and Pacific Islanders (5.8), Blacks (5.4) and Hispanics (5.7).”
Their graph:
So much for that claim.
Hey, do you know which American ethnic group also has a history of trauma and oppression? Besides the Jews. Black people.
If trauma and oppression lead to depression and suicide, then the black suicide rate ought to be closer to the Indian suicide rate, and the white rate ought to be down at the bottom.
I guess this is a point in favor of my “whites are depressive” theory, though.
Also, “In 2013, nine U.S. states, all in the West, had age-adjusted suicide rates in excess of 18: Montana (23.7), Alaska (23.1), Utah (21.4), Wyoming (21.4), New Mexico (20.3), Idaho (19.2), Nevada (18.2), Colorado (18.5), and South Dakota (18.2). Five locales had age-adjusted suicide rates lower than 9 per 100,000: District of Columbia (5.8), New Jersey (8.0), New York (8.1), Massachusetts (8.2), and Connecticut (8.7).”
States by suicide rate
Hrm, looks like there’s also a guns and impulsivity/violence correlation–I think the West was generally settled by more violent, impulsive whites who liked the rough-and-tumble lifestyle, and where there are guns, people kill themselves with them.
I bet CA has some restrictive gun laws and some extensive mental health services.
You know what the dark blue doesn’t look like it correlates with?
Healthcare funding.
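If you wanted to check that impression against actual numbers instead of eyeballing map colors, a minimal sketch might look like this. The CSV filenames and column names below are hypothetical placeholders for whatever state-level data you pull (CDC suicide rates, gun-ownership surveys, mental-health spending figures)–not a dataset I’m citing.

```python
# A minimal sketch, assuming you've assembled three state-level tables yourself;
# the filenames and column names are hypothetical placeholders, not real sources.
import pandas as pd

suicide = pd.read_csv("state_suicide_rates.csv")   # columns: state, suicide_rate
guns = pd.read_csv("state_gun_ownership.csv")      # columns: state, pct_gun_households
spending = pd.read_csv("state_mh_spending.csv")    # columns: state, mh_spending_per_capita

df = suicide.merge(guns, on="state").merge(spending, on="state")

print("suicide rate vs. gun ownership:",
      df["suicide_rate"].corr(df["pct_gun_households"]))
print("suicide rate vs. mental-health spending:",
      df["suicide_rate"].corr(df["mh_spending_per_capita"]))
# If the eyeball impression above is right, the first correlation should be
# clearly positive and the second near zero or even negative.
```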
Back to Walker. “Nearly one in four of these suicidal medication overdoses used psychiatric medications. The majority of these medications originated through the Indian Health Service itself and included amphetamine and stimulants, tricyclic and other antidepressants, sedatives, benzodiazepines, and barbiturates.”
Shockingly, people diagnosed with depression sometimes try to commit suicide.
Wait, aren’t amphetamines and “stimulants” used primarily to treat conditions like ADHD or to help people stay awake, not depression? And aren’t sedatives, benzos, and barbiturates used primarily for things like anxiety and pain relief? I don’t think these are the drugs Walker is looking for.
“What’s truly remarkable is that this is not the first time the mental health movement in Indian Country has helped to destroy Native people. Today’s making of a Mentally Ill Indian to “treat” is just a variation on an old idea, … The Native mental health system has been a tool of cultural genocide for over 175 years—seven generations. Long before there was this Mentally Ill Indian to treat, this movement was busy creating and perpetuating the Crazy Indian, the Dumb Indian, and the Drunken Indian.”
Walker’s depiction of the past may be accurate. His depiction of the present sounds like total nonsense.
“We must make peace with the fabled Firewater Myth, a false tale of heightened susceptibility to alcoholism and substances that even Native people sometimes tell themselves.”
The fuck? Of course Indians are more susceptible to alcoholism than non-Indians–everyone on earth whose ancestors haven’t had a long exposure to wheat tends to handle alcohol badly. Hell, the Scottish are more susceptible to alcoholism than, say, the Greeks:
Some people just have trouble with alcohol. Like the Russians.
Look, I don’t know if the IHS does a good job. Maybe its employees are poorly-trained, abrasive pharmaceutical shills who diagnose everyone who comes through their doors with depression and then prescribe them massive quantities of barbiturates.
And it could well be that the American psychiatric establishment is doing all sorts of things wrong.
But the things Walker cites in the article don’t indicate anything of the sort.
And for goodness’ sake, if you’re depressed or have any other mental health problem, get advice from someone who actually knows what they’re talking about.