I remember when I first heard about epigenetics–the concept sounded awesome.

Now I cringe at the word.

To oversimplify, “epigenetics” refers to biological processes that help turn specific parts of the DNA on and off. For example, while nearly every cell in your body (except sperm and eggs, which carry half, and red blood cells, which have no nucleus at all) has identical DNA, they obviously do different stuff. Eyeball cells and brain cells and muscle cells are all coded from the exact same DNA, but epigenetic factors make sure you don’t end up with muscles wiggling around in your eye sockets–or as an undifferentiated mass of slime.

If external environmental factors can have epigenetic effects, I’d expect cancer to be a biggie, since cell division and differentiation are epigenetically regulated.

What epigenetics probably doesn’t do is everything people want it to do.

There’s a history here of people really wanting genetics to do things it doesn’t–to impose free will onto it.* Lamarck can be forgiven–we didn’t know about DNA back then. His theory was that an organism can pass on characteristics it acquired during its lifetime to its offspring, thus driving evolution. The classic example: if a giraffe stretches its neck to reach leaves high up in the trees, its descendants will be born with longer necks. It’s not a bad theory for a guy born in the mid-1700s, but science has advanced a bit since then.

The USSR put substantial resources into trying to make environmental effects show up in one’s descendants–including shooting anyone who disagreed.

Trofim Lysenko, a Soviet agronomist, claimed he could make wheat grow in winter–and pass the trait on to its offspring–by exposing the seeds to cold. Of course, if that actually worked, Europeans would have developed cold-weather wheat thousands of years ago.

Lysenko was essentially the USSR’s version of an Affirmative Action hire:

“By the late 1920s, the Soviet political leaders had given their support to Lysenko. This support was a consequence, in part, of policies put in place by the Communist Party to rapidly promote members of the proletariat into leadership positions in agriculture, science and industry. Party officials were looking for promising candidates with backgrounds similar to Lysenko’s: born of a peasant family, without formal academic training or affiliations to the academic community.” (From the Wikipedia page on Lysenko)

In 1940, Lysenko became director of the USSR’s Academy of Science’s Institute of Genetics–a position he would hold until 1964. In 1948, scientific dissent from Lysenkoism was formally outlawed.

“From 1934 to 1940, under Lysenko’s admonitions and with Stalin’s approval, many geneticists were executed (including Isaak Agol, Solomon Levit, Grigorii Levitskii, Georgii Karpechenko and Georgii Nadson) or sent to labor camps. The famous Soviet geneticist Nikolai Vavilov was arrested in 1940 and died in prison in 1943. Hermann Joseph Muller (and his teachings about genetics) was criticized as a bourgeois, capitalist, imperialist, and promoting fascism so he left the USSR, to return to the USA via Republican Spain.

In 1948, genetics was officially declared “a bourgeois pseudoscience”; all geneticists were fired from their jobs (some were also arrested), and all genetic research was discontinued.”  (From the Wikipedia page on Lysenkoism.)

Alas, Wikipedia does not tell me whether anyone died from Lysenkoism itself, say, after their crops failed, but I hear the USSR didn’t have a great agricultural record.

Lysenko got kicked out in the 60s, but his theories have returned in the form of SJW-inspired claims about the magic of epigenetics, used to explain how any differences in average group performance or behavior are actually the fault of long-dead white people. E.g.:

Trauma May be Woven into DNA of Native Americans, by Mary Pember

“The science of epigenetics, literally “above the gene,” proposes that we pass along more than DNA in our genes; it suggests that our genes can carry memories of trauma experienced by our ancestors and can influence how we react to trauma and stress.”

That’s a bold statement. At least Pember is making Walker’s argument for him.

Of course, that’s not actually what epigenetics says, but I’ll get to that in a bit.

“The Academy of Pediatrics reports that the way genes work in our bodies determines neuroendocrine structure and is strongly influenced by experience.”

That’s an interesting source. While I am sure the A of P knows its stuff, their specialty is medical care for small children, not genetics. Why did Pember not use an authority on genetics?

Note: when thinking about whether or not to trust an article’s science claims, consider the sources they use. If they don’t cite a source or cite an unusual, obscure, or less-than-authoritative source, then there’s a good chance they are lying or cherry-picking data to make a claim that is not actually backed up by the bulk of findings in the field. Notice that Pember does not provide a link to the A of P’s report on the subject, nor provide any other information so that an interested reader can go read the full report.

Wikipedia is actually a decent source on most subjects. Not perfect, of course, but it is usually decent. If I were writing science articles for pay, I would have subscriptions to major science journals and devote part of my day to reading them, as that would be my job. Since I’m just a dude with a blog who doesn’t get paid and so can’t afford a lot of journal memberships and has to do a real job for most of the day, I use a lot of Wikipedia. Sorry.

Also, I just want to note that the structure of this sentence is really wonky. “The way genes work in our bodies”? As opposed to how they work outside of our bodies? Do I have a bunch of DNA running around building neurotransmitters in the carpet or something? Written properly, this sentence would read, “According to the A of P, genes determine neuroendocrine structures, in a process strongly influenced by experience.”

Pember continues:

“Trauma experienced by earlier generations can influence the structure of our genes, making them more likely to “switch on” negative responses to stress and trauma.”

Pember does not clarify whether she is continuing to cite from the A of P, or just giving her own opinions. The structure of the paragraph implies that this statement comes from the A of P, but again, no link to the original source is given, so I am hard pressed to figure out which it is.

At any rate, this doesn’t sound like something the A of P would say, because it is obviously and blatantly incorrect. Trauma *may* affect one’s epigenetic marks, but not the structure of one’s genes. The difference is rather large. Viruses and ionizing radiation can change the structure of your DNA; “trauma” won’t.

“The now famous 1998 ACES study conducted by the Centers for Disease Control (CDC) and Kaiser Permanente showed that such adverse experiences could contribute to mental and physical illness.”

Um, no shit? Is this one of those cases of paying smart people tons of money to tell us grass is green and sky is blue? Also, that’s a really funny definition of “famous.” Looks like the author is trying to claim her sources have more authority than they actually do.

“Folks in Indian country wonder what took science so long to catch up with traditional Native knowledge.”

I’m pretty sure practically everyone already knew this.

“According to Bitsoi, epigenetics is beginning to uncover scientific proof that intergenerational trauma is real. Historical trauma, therefore, can be seen as a contributing cause in the development of illnesses such as PTSD, depression and type 2 diabetes.”

Okay, do you know what epigenetics actually shows?

The experiment Wikipedia cites involved male mice trained to fear a certain smell via small electric shocks delivered whenever the smell was present. The offspring of these mice, conceived after the foot-shocking was finished, startled in response to the smell–they had inherited their fathers’ epigenetic markers enhancing their response to that specific odor.

It’s a big jump from “mice startle at smells” to “causes PTSD”–a big jump for two reasons in particular:

1. Your epigenetics change all the time. It’s like learning. You don’t just learn one thing and then have this one thing you’ve learned stuck in your head for the entire rest of your life, unable to learn anything new. Your epigenetics change in response to life circumstances throughout your entire life.

E.g. (from Wikipedia):

“One of the first high-throughput studies of epigenetic differences between monozygotic twins focused in comparing global and locus-specific changes in DNA methylation and histone modifications in a sample of 40 monozygotic twin pairs. In this case, only healthy twin pairs were studied, but a wide range of ages was represented, between 3 and 74 years. One of the major conclusions from this study was that there is an age-dependent accumulation of epigenetic differences between the two siblings of twin pairs. This accumulation suggests the existence of epigenetic “drift”.”

In other words, when identical twins are babies, they have very similar epigenetics. As they get older, their epigenetics get more and more different because they have had different experiences out in the world, and their experiences have changed their epigenetics. Your epigenetics change as you age.

Which means that the chances of the exact same epigenetics being passed down from father to child over many generations are essentially zilch.
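To make the arithmetic concrete, here is a toy sketch (the per-generation reset rate is a made-up illustrative parameter, not a measured one): if each generation independently re-writes a given epigenetic mark with probability r, the chance the same mark survives n generations unchanged is (1 − r)^n, which collapses fast.

```python
# Toy model, NOT real biology: assume each generation independently
# erases/re-writes a given epigenetic mark with probability reset_rate.
# The survival chance across n generations then decays geometrically.

def survival_probability(reset_rate: float, generations: int) -> float:
    """P(mark survives unchanged) = (1 - reset_rate) ** generations."""
    return (1.0 - reset_rate) ** generations

# Even a modest 50% per-generation reset leaves under a 1% chance of
# the same mark persisting unchanged for seven generations:
for n in (1, 2, 5, 7):
    print(n, survival_probability(0.5, n))
```

Whatever the true reset rate is, as long as it is nonzero the same logic applies: faithful transmission of one specific mark across many generations becomes vanishingly unlikely.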

2. Tons of populations have experienced trauma. If you go back far enough in anyone’s family tree, you can probably find someone who experienced trauma. My grandparents went through trauma during the Great Depression and WWII. My biological parents were both traumatized as children. So were millions, perhaps billions, of other people on this earth. If trauma gets encoded in people’s DNA (or their epigenetics), then it’s encoded in virtually every person on the face of this planet.

Type 2 Diabetes, Depression, and PTSD are not evenly distributed across the planet. Hell, they aren’t even common in all peoples who have had recent, large oppression events. African Americans have low levels of depression and commit suicide at much lower rates than whites–have white Americans suffered more oppression than black Americans? Whites commit suicide at a higher rate than Indians–have the whites suffered more historical trauma? On a global scale, Israel has a relatively low suicide rate–lower than India’s. Did India recently experience some tragedy worse than the Holocaust? (See yesterday’s post for all stats.)

Type 2 Diabetes reaches its global maximum in Saudi Arabia, Oman, and the UAE, which as far as I know have not been particularly traumatized lately, and is much lower among Holocaust descendants in nearby Israel:

From a BBC article on obesity

It’s also very low in Sub-Saharan Africa, even though all of the stuff that causes “intergenerational trauma” probably happened there in spades. Have Americans been traumatized more than the Congolese?

This map doesn’t make any sense from the POV of historical trauma. It makes perfect sense if you know who’s eating fatty Western diets they aren’t adapted to. Saudi Arabia and the UAE are fucking rich (I bet Oman is, too), and their population of nomadic goat herders has settled down to eat all the cake they want. The former nomadic lifestyle did not equip them to digest lots of refined grains, which are hard to grow in the desert. Most of Africa (and Yemen) is too poor to gorge on enough food to get Type-2 Diabetes; China and Mongolia have stuck to their traditional diets, to which they are well adapted. Mexicans are probably not adapted to wheat. The former Soviet countries have probably adopted Western diets. Etc., etc.

Why bring up Type-2 Diabetes at all? Well, it appears Indians get Type-2 Diabetes at about the same rate as Mexicans, [Note: PDF] probably for the exact same reasons: their ancestors didn’t eat a lot of wheat, refined sugar, and refined fats, and so they aren’t adapted to the Western diet. (FWIW, White Americans aren’t all that well adapted to the Western Diet, either.)

Everybody who isn’t adapted to the Western Diet gets high rates of diabetes and obesity if they start eating it, whether they had historical trauma or not. We don’t need epigenetic trauma to explain this.

“The researchers found that Native peoples have high rates of ACE’s and health problems such as posttraumatic stress, depression and substance abuse, diabetes all linked with methylation of genes regulating the body’s response to stress. “The persistence of stress associated with discrimination and historical trauma converges to add immeasurably to these challenges,” the researchers wrote.

Since there is a dearth of studies examining these findings, the researchers stated they were unable to conclude a direct cause between epigenetics and high rates of certain diseases among Native Americans.”

There’s a dearth of studies because it is deeply immoral to purposefully traumatize humans and then breed them to see if their kids come out fucked up. Luckily for us (or not, depending on how you look at it), humans have been traumatizing each other for ages, so we can just look at actually traumatized populations. There does seem to be an effect down the road for people whose parents or grandparents went through famines, but “the effects could last for two generations.”

As horrible as the treatment of the Indians has been, I am pretty sure they didn’t go through a famine two generations ago on the order of what happened when the Nazis occupied the Netherlands and 18,000–22,000 people starved.

In other words, there’s no evidence of any long-term epigenetic effects large enough to create the effects they’re claiming. As I’ve said, if epigenetics actually acted like that, virtually everyone on earth would show the effects.

The reason they don’t is because epigenetic effects are relatively short-lived. Your epigenetics get re-written throughout your lifetime.

“Researchers such as Shannon Sullivan, professor of philosophy at UNC Charlotte, suggests in her article “Inheriting Racist Disparities in Health: Epigenetics and the Transgenerational Effects of White Racism,” that the science has faint echoes of eugenics, the social movement claiming to improve genetic features of humans through selective breeding and sterilization.”

I’m glad the philosophers are weighing in on science. I am sure philosophers know all about genetics. Hey, remember what I said about citing sources that are actual authorities on the subject at hand? My cousin Bob has all sorts of things to say about epigenetics, but that doesn’t mean his opinions are worth sharing.

The article ends:

“Isolating and nurturing a resilience gene may well be on the horizon.”

How do you nurture a gene?


There are things that epigenetics does. Just not the things people want it to do.


How Much anti-Psych Research is Funded by Guys who Think all Mental Illness is Caused by Dead Aliens?

And how much is just idiots?

“How the US Mental Health System Makes Natives Sick and Suicidal,” by David Walker.

Important backstory: once upon a time, I made some offhand comments about mental health/psychiatric drugs that accidentally influenced someone else to go off their medication, which began a downward spiral that ended with them in the hospital after a suicide attempt. Several years later, you could still see the words “I suck” scarred into their skin.

There were obviously some other nasty things that had nothing to do with me before the attempt, but regardless, there’s an important lesson: don’t say stupid ass things about mental health shit you know nothing about.

Also, don’t take mental health advice from people who don’t know what they’re talking about.

In my entirely inadequate defense, I was young and very dumb. David Walker is neither–and he is being published by irresponsible people who ought to know better.

To be clear: I am not a psychiatrist. I’m a dumb person on the internet with opinions. I am going to do my very damn best to counteract even dumber ideas, but for god’s sakes, if you have mental health issues, consult with someone with actual expertise in the field.

Also, you know, few things bug me like watching science and logic be abused. So let’s get down to business:

This is one of those articles where SJW-logic combines with sketchy research–the sort I suspect originated with funding from guys trying to prove that all mental illnesses were caused by Galactic Overlord Xenu–to make a not-very-satisfying article. I suppose it is petty to complain that the piece didn’t flow well, but still, it irked.

Basically, to sum: The Indian Health Service is evil because it uses standard psychiatry language and treatment–the exact same language and treatment as everyone else in the country is getting–instead of filling its manuals with a bunch of social-justice buzzwords like “colonization” and “historical trauma”. The article does not tell us how, exactly, inclusion of these buzzwords is supposed to actually change the practice of psychiatry–part of what made the piece frustrating on a technical level.

The author then makes a bunch of absolutist claims about standard depression treatment that range from the obviously false to matters of real debate in the field. Very few of his claims are based on what I’d call “settled science”–and if you’re going to make absolutist claims about medical matters, please try to only say things that are actually settled.

The crux of Walker’s argument is a claim that anti-depressants actually kill people and decrease libido, so therefore the IHS is committing genocide by murdering Indians and preventing the births of new ones.

Ugh, when I put it like that, it sounds so obviously dumb.

Some actual quotes:

“In the last 40 years, certain English words and phrases have become more acceptable to indigenous scholars, thought leaders, and elders for describing shared Native experiences. They include genocide, cultural destruction, colonization, forced assimilation, loss of language, boarding school, termination, historical trauma and more general terms, such as racism, poverty, life expectancy, and educational barriers. There are many more.”

Historical trauma is horribly sad, of course, but as a cause for depression, I suspect it ranks pretty low. If historical trauma suffered by one’s ancestors results in continued difficulties several generations down the line, then the descendants of all traumatized groups ought to show similar effects. Most of Europe got pretty traumatized during WWII, but most of Europe seems to have recovered. Even the Jews, who practically invented modern psychiatry, use standard psychiatric models for talking about their depression without invoking the Holocaust. (Probably because depression rates are pretty low in Israel.)

But if you want to pursue this line of argument, you would need to show first that Indians are being diagnosed with depression (or other mental disorders) at a higher rate than the rest of the population, and then you would want to show that a large % of the excess are actually suffering some form of long-term effects of historical trauma. Third, you’d want to show that some alternative method of treatment is more effective than the current method.

To be fair, I am sure there are many ways that psychiatry sucks or could be improved. I just prefer good arguments on the subject.

“…the agency’s behavioral health manual mentions psychiatrist and psychiatric 23 times, therapy 18 times, pharmacotherapy, medication, drugs, and prescription 16 times, and the word treatment, a whopping 89 times. But it only uses the word violence once, and you won’t find a single mention of genocide, cultural destruction, colonization, historical trauma, etc.—nor even racism, poverty, life expectancy or educational barriers.”

It’s absolutely shocking that a government-issued psychiatry manual uses standard terms used in the psychiatry field like “medication” and “psychiatrist,” but doesn’t talk about particular left-wing political theories. It’s almost like the gov’t is trying to be responsible and follow accepted practice in the field or something. Of course, to SJWs, even medical care should be sacrificed before the altar of advancing the buzz-word agenda.

“This federal agency doesn’t acknowledge the reality of oppression within the lives of Native people.”

and… so? I know it sucks to deal with people who don’t acknowledge what you’re going through. My own approach to such people is to avoid them. If you don’t like what the IHS has to offer, then offer something better. Start your own organization offering support to people suffering from historical trauma. If your system is superior, you’ll not only benefit thousands (perhaps millions!) of people, but probably become highly respected and well-off in the process. Even if you, personally, don’t have the resources to start such a project, surely someone does.

If you can’t do that, you can at least avoid the IHS if you don’t like them. No one is forcing you to go to them.

BTW, in case you are wondering what the IHS is, here’s what Wikipedia has to say about them:

“The Indian Health Service (IHS) is an operating division (OPDIV) within the U.S. Department of Health and Human Services (HHS). IHS is responsible for providing medical and public health services to members of federally recognized Tribes and Alaska Natives. … its goal is to raise their health status to the highest possible level. … IHS currently provides health services to approximately 1.8 million of the 3.3 million American Indians and Alaska Natives who belong to more than 557 federally recognized tribes in 35 states. The agency’s annual budget is about $4.3 billion (as of December 2011).”

Sounds nefarious. So who runs this evil agency of health?

“The IHS employs approximately 2,700 nurses, 900 physicians, 400 engineers, 500 pharmacists, and 300 dentists, as well as other health professionals totaling more than 15,000 in all. The Indian Health Service is one of two federal agencies mandated to use Indian Preference in hiring. This law requires the agency to give preference hiring to qualified Indian applicants before considering non-Indian candidates for positions. … The Indian Health Service is headed by Dr. Yvette Roubideaux, M.D., M.P.H., a member of the Rosebud Sioux in South Dakota.”

So… the IHS, run by Indians, is trying to genocide other Indians by giving them mental health care?

And maybe I’m missing something, but don’t you think Dr. Roubideaux has some idea about the historical oppression of her own people?

Then we get into some anti-Pfizer/Zoloft business:

“For about a decade, IHS has set as one of its goals the detection of Native depression. [How evil of them!] This has been done by seeking to widen use of the Patient Health Questionnaire-9 (PHQ-9), which asks patients to describe to what degree they feel discouraged, downhearted, tired, low appetite, unable to sleep, slow-moving, easily distracted or as though life is no longer worth living.

The PHQ-9 was developed in the 1990s for drug behemoth Pfizer Corporation by prominent psychiatrist and contract researcher Robert Spitzer and several others. Although it owns the copyright, Pfizer offers the PHQ-9 for free use by primary health care providers. Why so generous? Perhaps because Pfizer is a top manufacturer of psychiatric medications, including its flagship antidepressant Zoloft® which earned the company as much as $2.9 billion annually before it went generic in 2006.”
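For reference, the scoring behind that screen is plain arithmetic, and it is the same for everyone: nine items, each rated 0 (“not at all”) to 3 (“nearly every day”), summed into a 0–27 total, with conventional severity cutoffs at 5, 10, 15, and 20. A minimal sketch in Python (the example responses are made up):

```python
# Illustrative sketch of standard PHQ-9 scoring, not IHS or Pfizer code.
# Nine items, each rated 0 ("not at all") through 3 ("nearly every day").

SEVERITY_CUTOFFS = [(20, "severe"), (15, "moderately severe"),
                    (10, "moderate"), (5, "mild"), (0, "minimal")]

def phq9_score(responses):
    """Sum nine 0-3 item responses into a 0-27 total."""
    if len(responses) != 9 or any(r not in range(4) for r in responses):
        raise ValueError("PHQ-9 expects nine responses, each 0-3")
    return sum(responses)

def phq9_severity(total):
    """Map a total score onto the conventional severity bands."""
    for cutoff, label in SEVERITY_CUTOFFS:
        if total >= cutoff:
            return label

total = phq9_score([1, 2, 1, 2, 1, 1, 1, 1, 0])  # hypothetical answers
print(total, phq9_severity(total))  # -> 10 moderate
```

Note there is nothing race-specific anywhere in that arithmetic: every patient, whatever their background, gets the same nine questions and the same cutoffs.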

I agree that it is reasonable to be skeptical of companies trying to sell you things, but the mere fact that a company sells a product does not automatically render it evil. For example, an umbrella company makes money if you buy umbrellas, but that doesn’t make umbrella companies evil. Pfizer wants to promote its product, but it also wants to make sure that product gets prescribed properly.

“Even with the discovery that the drug can increase the risk of birth defects, 41 million prescriptions for Zoloft® were filled in 2013.”

Probably to people who weren’t pregnant.

“The DSM III-R created 110 new psychiatric labels, a number that had climbed by another 100 more by the time I started working at an IHS clinic in 2000.

Around that time, Pfizer, like many other big pharmaceutical corporations, was pouring millions of dollars into lavish marketing seminars disguised as “continuing education” on the uses of psychiatric medication for physicians and nurses with no mental health training.

… After this event, several primary care colleagues began touting their new expertise in mental health, and I was regularly advised that psychiatric medications were (obviously) the new “treatment of choice.” ”

Seriously, he’s claiming that psychiatric medications were the “new” “treatment of choice” in the year 2000? Zoloft was introduced in 1991. Prozac revolutionized the treatment of depression way back in 1987. Walker’s off by over a decade.

Now, as Scott Alexander says, beware the man of one study: you can visit Prozac and Zoloft’s Wikipedia pages yourself and read the debate about effectiveness.

Long story short, as I understand it: psychiatric medication is actually way cheaper than psychological therapy. If your primary care doctor can prescribe you Zoloft, then you can skip paying to see a psychiatrist altogether.

Back in the day, before we had much in the way of medication for anything, the preferred method for helping people cope with their problems was telling them that they secretly wanted to fuck their mothers. This sounds dumb, but it beats the shit out of locking mentally ill people up in asylums, where they tended to die hideously. Unfortunately, talking to people about their problems doesn’t seem to have worked all that well, though you could bill a ton for a half-hour session every week for forty years straight, or until the patient ran out of money.

Modern anti-depressant medications appear to actually work for people with moderate to severe depression, though last time I checked, medication combined with therapy/support had the best outcomes–if anything, I suspect a lot of people could use a lot more support in their lives.

I should clarify: when I say “work,” I don’t mean they cure the depression. This has not been my personal observation of the depressed people I know, though maybe they do for some people. What they do seem to do is lessen the severity of the depression, allowing the depressed person to function.

“Since those days, affixing the depression label to Native experience has become big business. IHS depends a great deal upon this activity—follow-up “medication management” encounters allow the agency to pull considerable extra revenue from Medicaid. One part of the federal government supplements funding for the other. That’s one reason it might be in the best interest of IHS to diagnose and treat depression, rather than acknowledge the emotional and behavioral difficulties resulting from chronic, intergenerational oppression.”

It’s totally awful of the US gov’t to give free medication and health care to people. Medically responsible follow-up to make sure patients are responding properly to their medication and not having awful side effects is especially evil. The government should totally cut that out. From now on, let’s cancel health services for the Native Peoples. That will totally end oppression.

Also, anyone who has ever paid an ounce of attention to anything the government does knows that expanding the IHS’s mandate to acknowledge the results of oppression would increase their funding, not decrease it.

Forgive me if it sounds a bit like Walker is actually trying to increase his pay.

“The most recent U.S. Public Health Service practice guidelines, which IHS primary care providers are required to use, states that “depression is a medical illness,” and in a nod to Big Pharma suppliers like Pfizer, serotonin-correcting medications (SSRIs) like Zoloft® “are frequently recommended as first-line antidepressant treatment options.””

My god, they use completely standard terminology and make factual statements about their field! Just like, IDK, all other mental healthcare providers in the country and throughout most of the developed world.

“This means IHS considers Native patients with a positive PHQ-9 screen to be mentally ill with depression.”

Dude, this means that patients of EVERY RACE with a positive PHQ-9 are considered mentally ill with depression. Seriously, it’s not like Pfizer issues a separate screening guide for different races. If I visit a shrink, I’m going to get the exact same questionnaires as you are.

Also, yes, depression is considered a mental illness, but Walker knows as well as I do that there’s a big difference between mentally ill with depression and, say, mentally ill with untreated schizophrenia.

“[For] instance, the biomedical theory IHS is still promoting is obsolete. After more than 50 years of research, there’s no valid Western science to back up this theory of depression (or any other psychiatric disorder besides dementia and intoxication). There’s no chemical imbalance to correct.”

Slate Star Codex did a very long and thorough takedown of this particular claim: simply put, Walker is full of shit and should be ashamed of himself. The “chemical imbalance” model of depression, while an oversimplification, is actually pretty darn accurate, mostly because your brain is full of chemicals. As Scott Alexander points out:

“And this starts to get into the next important point I want to bring up, which is chemical imbalance is a really broad idea.

Like, some of these articles seem to want to contrast the “discredited” chemical imbalance theory with up-and-coming “more sophisticated” theories based on hippocampal neurogenesis and neuroinflammation. Well, I have bad news for you. Hippocampal neurogenesis is heavily regulated by brain-derived neutrophic factor, a chemical. Neuroinflammation is mediated by cytokines. Which are also chemicals. Do you think depression is caused by stress? The stress hormone cortisol is…a chemical. Do you think it’s entirely genetic? Genes code for proteins – chemicals again. Do you think it’s caused by poor diet? What exactly do you think food is made of?”

One of the most important things about the “chemical imbalance model” is that it helps the patient (again quoting Scott):

“People come in with depression, and they think it means they’re lazy, or they don’t have enough willpower, or they’re bad people. Or else they don’t think it, but their families do: why can’t she just pull herself up with her own bootstraps, make a bit of an effort? Or: we were good parents, we did everything right, why is he still doing this? Doesn’t he love us?

And I could say: “Well, it’s complicated, but basically in people who are genetically predisposed, some sort of precipitating factor, which can be anything from a disruption in circadian rhythm to a stressful event that increases levels of cortisol to anything that activates the immune system into a pro-inflammatory mode, is going to trigger a bunch of different changes along metabolic pathways that shifts all of them into a different attractor state. This can involve the release of cytokines which cause neuroinflammation which shifts the balance between kynurinins and serotonin in the tryptophan pathway, or a decrease in secretion of brain-derived neutrotrophic factor which inhibits hippocampal neurogenesis, and for some reason all of this also seems to elevate serotonin in the raphe nuclei but decrease it in the hippocampus, and probably other monoamines like dopamine and norepinephrine are involved as well, and of course we can’t forget the hypothalamopituitaryadrenocortical axis, although for all I know this is all total bunk and the real culprit is some other system that has downstream effects on all of these or just…”

Or I could say: “Fuck you, it’s a chemical imbalance.””

I’m going to quote Scott a little more:

“I’ve previously said we use talk of disease and biology to distinguish between things we can expect to respond to rational choice and social incentives and things that don’t. If I’m lying in bed because I’m sleepy, then yelling at me to get up will solve the problem, so we call sleepiness a natural state. If I’m lying in bed because I’m paralyzed, then yelling at me to get up won’t change anything, so we call paralysis a disease state. Talk of biology tells people to shut off their normal intuitive ways of modeling the world. Intuitively, if my son is refusing to go to work, it means I didn’t raise him very well and he doesn’t love me enough to help support the family. If I say “depression is a chemical imbalance”, well, that means that the problem is some sort of complicated science thing and I should stop using my “mirror neurons” and my social skills module to figure out where I went wrong or where he went wrong. …

“What “chemical imbalance” does for depression is try to force it down to this lower level, tell people to stop trying to use rational and emotional explanations for why their friend or family member is acting this way. It’s not a claim that nothing caused the chemical imbalance – maybe a recent breakup did – but if you try to use your normal social intuitions to determine why your friend or family member is behaving the way they are after the breakup, you’re going to get screwy results. …

“So this is my answer to the accusation that psychiatry erred in promoting the idea of a “chemical imbalance”. The idea that depression is a drop-dead simple serotonin deficiency was never taken seriously by mainstream psychiatry. The idea that depression was a complicated pattern of derangement in several different brain chemicals that may well be interacting with or downstream from other causes has always been taken seriously, and continues to be pretty plausible. Whatever depression is, it’s very likely it will involve chemicals in some way, and it’s useful to emphasize that fact in order to convince people to take depression seriously as something that is beyond the intuitively-modeled “free will” of the people suffering it. “Chemical imbalance” is probably no longer the best phrase for that because of the baggage it’s taken on, but the best phrase will probably be one that captures a lot of the same idea.”

Back to the article.

Walker states, ” Even psychiatrist Ronald Pies, editor-in-chief emeritus of Psychiatric Times, admitted “the ‘chemical imbalance’ notion was always a kind of urban legend.” ”

Oh, look, Dr. Pies was kind enough to actually comment on the article. You can scroll to the bottom to read his evisceration of Walker’s points–” …First, while I have indeed called the “chemical imbalance” explanation of mood disorders an “urban legend”—it was never a real theory propounded by well-informed psychiatrists—this in no way means that antidepressants are ineffective, harmful, or no better than “sugar pills.” The precise mechanism of action of antidepressants is not relevant to how effective they are, when the patient is properly diagnosed and carefully monitored. …

” Even Kirsch’s data (which have been roundly criticized if not discredited) found that antidepressants were more effective than the placebo condition for severe major depression. In a re-analysis of the United States Food and Drug Administration database studies previously analyzed by Kirsch et al, Vöhringer and Ghaemi concluded that antidepressant benefit is seen not only in severe depression but also in moderate (though not mild) depression. …

” While there is no clear evidence that antidepressants significantly reduce suicide rates, neither is there convincing evidence that they increase suicide rates.”

Here’s my own suspicion: depressed people on anti-depressants have highs and lows, just like everyone else, but because their medication can’t completely 100% cure them, sooner or later they end up feeling pretty damn shitty during a low point and start thinking about suicide or actually try it.

However, Pies notes that there are plenty of studies that have found that anti-depressants reduce a person’s overall risk of suicide.

In other words, Walker is, at best, completely misrepresenting the science to make his particular side sound like the established wisdom in the field when he is, in fact, on the minority side. That doesn’t guarantee that he’s wrong–it just means he is a liar.

And you know what I think about liars.

And you can probably imagine what I think about liars who lie in ways that might endanger the mental health of other people and cause them to commit suicide.

But wait, he keeps going:

“In an astonishing twist, researchers working with the World Health Organization (WHO) concluded that building more mental health services is a major factor in increasing the suicide rate. This finding may feel implausible, but it’s been repeated several times across large studies. WHO first studied suicide in relation to mental health systems in 100 countries in 2004, and then did so again in 2010, concluding that:

“[S]uicide rates… were increased in countries with mental health legislation, there was a significant positive correlation between suicide rates, and the percentage of the total health budget spent on mental health; and… suicide rates… were higher in countries with greater provision of mental health services, including the number of psychiatric beds, psychiatrists and psychiatric nurses, and the availability of training in mental health for primary care professionals.””

Do you know why I’ve been referring to Walker as “Walker” and not “Dr. Walker,” despite his apparent PhD? It’s because anyone who does not understand the difference between correlation and causation does not deserve a doctorate–or even a high school diploma–of any sort. Maybe people spend more on mental health because of suicides?

Oh, look, here’s the map he uses to support his claim:

This map has been confounding my attempt to claim that Finno-Scandians like death metal because they're depressives
Look at all those high-mental healthcare spending African countries!

I don’t know about you, but it looks to me like the former USSR, India/Bhutan/Nepal, Sub-Saharan Africa, Guyana, and Japan and the Koreas have the highest suicide rates in the world. Among these countries, all but Japan and S. Korea are either extremely poor–and so probably have little to no public spending on mental healthcare–or are former Soviet states, which are less developed than their lower-suicide neighbors to the west; whatever is going on in them is probably related to their shared Soviet history, not their fabulous mental healthcare funding.

In other words, this map shows the opposite of what Walker claims it does.

Again, this doesn’t mean he’s necessarily wrong. It just means that the data on the subject is mixed and does not clearly support his case in the manner he claims.

” Despite what’s known about their significant limitations and scientific groundlessness, antidepressants are still valued by some people for creating “emotional numbness,” according to psychiatric researcher David Healy.”

So they don’t have any effects, but people keep using them for their… effects? Which is it? Do they work or not work?

And emotional numbness is a damn sight better than wanting to kill yourself. That Walker does not recognize this shows just how disconnected he is from the realities of life for many people struggling with depression.

“The side effect of antidepressants, however, in decreasing sexual energy (libido) is much stronger than this numbing effect—sexual disinterest or difficulty becoming aroused or achieving orgasm occurs in as many as 60 percent of consumers.”

Which, again, is still better than wanting to kill yourself. I hear death really puts a dent in your sex life.

However, I will note that this is a real side effect, and if you are taking anti-depressants and really can’t stand the mood kill (pardon the pun,) talk to your doctor, because there’s always the possibility that a different medication will treat your depression without affecting your libido.

“A formal report on IHS internal “Suicide Surveillance” data issued by Great Lakes Inter-Tribal Epidemiology Center states the suicide rate for all U.S. adults currently hovers at 10 for every 100,000 people, while for the Native patients IHS tracked, the rate was 17 per 100,000. This rate varied widely across the regions IHS serves—in California it was 5.5, while in Alaska, 38.5.”

Interesting statistics. I’m guessing the difference between Alaska and California holds true for whites, too–I suspect it’s the long, cold, dark winters.

According to the American Foundation for Suicide Prevention,

“In 2013, the highest U.S. suicide rate (14.2) was among Whites and the second highest rate (11.7) was among American Indians and Alaska Natives (Figure 5). Much lower and roughly similar rates were found among Asians and Pacific Islanders (5.8), Blacks (5.4) and Hispanics (5.7).”

Their graph:

Actually, the interesting thing is just how non-suicidal blacks seem to be.
So much for that claim

Hey, do you know which American ethnic group also has a history of trauma and oppression? Besides the Jews. Black people.

If trauma and oppression leads to depression and suicide, then the black suicide rate ought to be closer to the Indian suicide rate, and the white rate ought to be down at the bottom.

I guess this is a point in favor of my “whites are depressive” theory, though.

Also, “In 2013, nine U.S. states, all in the West, had age-adjusted suicide rates in excess of 18: Montana (23.7), Alaska (23.1), Utah (21.4), Wyoming (21.4), New Mexico (20.3), Idaho (19.2), Nevada (18.2), Colorado (18.5), and South Dakota (18.2). Five locales had age-adjusted suicide rates lower than 9 per 100,000: District of Columbia (5.8), New Jersey (8.0), New York (8.1), Massachusetts (8.2), and Connecticut (8.7).”

I'd like to see this map compared to a map of white violence rates
States by suicide rate

Hrm, looks like there’s also a guns and impulsivity/violence correlation–I think the West was generally settled by more violent, impulsive whites who like the rough and tumble lifestyle, and where there are guns, people kill themselves with them.

I bet CA has some restrictive gun laws and some extensive mental health services.

You know what the dark blue doesn’t look like it correlates with?

Healthcare funding.

Back to Walker. “Nearly one in four of these suicidal medication overdoses used psychiatric medications. The majority of these medications originated through the Indian Health Service itself and included amphetamine and stimulants, tricyclic and other antidepressants, sedatives, benzodiazepines, and barbiturates.”

Shockingly, people diagnosed with depression sometimes try to commit suicide.

Wait, aren’t amphetamines and “stimulants” primarily used for treating conditions like ADHD or for helping people stay awake, not depression? And aren’t sedatives, benzos, and barbiturates primarily used for things like anxiety and pain relief? I don’t think these are the drugs Walker is looking for.

” What’s truly remarkable is that this is not the first time the mental health movement in Indian Country has helped to destroy Native people. Today’s making of a Mentally Ill Indian to “treat” is just a variation on an old idea, … The Native mental health system has been a tool of cultural genocide for over 175 years—seven generations. Long before there was this Mentally Ill Indian to treat, this movement was busy creating and perpetuating the Crazy Indian, the Dumb Indian, and the Drunken Indian.”

Walker’s depiction of the past may be accurate. His depiction of the present sounds like total nonsense.

” We must make peace with the fabled Firewater Myth, a false tale of heightened susceptibility to alcoholism and substances that even Native people sometimes tell themselves.”

The fuck? Of course Indians are more susceptible to alcoholism than non-Indians–everyone on earth whose ancestors haven’t had a long exposure to wheat tends to handle alcohol badly. Hell, the Scottish are more susceptible to alcoholism than, say, the Greeks:


Some people just have trouble with alcohol. Like the Russians.


Look, I don’t know if the IHS does a good job. Maybe its employees are poorly-trained, abrasive pharmaceutical shills who diagnose everyone who comes through their doors with depression and then prescribe them massive quantities of barbiturates.

And it could well be that the American psychiatric establishment is doing all sorts of things wrong.

But the things Walker cites in the article don’t indicate anything of the sort.

And for goodness sakes, if you’re depressed or have any other mental health problem, get advice from someone who actually knows what they’re talking about.

Conservatives and Liberals Assume Everyone Else is Like Themselves

Conservatives are well-known for their “pull yourself up by your bootlaces” ideology, and liberals have made whole careers out of claiming that conservatives are hypocrites who got a hand up in their own lives, but want to deny that same help to everyone else. Conservatives, of course, claim that they got where they did by dint of sheer hard work and willpower.

So which is it? Are conservatives just liars who want to keep all of the goodies for themselves? Or do they practice what they preach? And what about liberals? How hard are they trying to get ahead?

A recent article in the LA Times describes a study on willpower/self control:

“In a series of three studies with more than 300 participants, the authors found that people who identify as conservative perform better on tests of self-control than those who identify as liberal regardless of race, socioeconomic status and gender.”

What about age? (I suppose we can assume they probably controlled for age.)

They tested self-control by asking volunteers to take the Stroop Test: words like “red” and “blue” are printed in a mismatched ink color, and naming the ink color without blurting out the word itself requires self-control and impulse-suppression. Conservatives tended to do better on the test than liberals.

They also found that conservatives are better at dieting.

In other words, people who themselves have a lot of self-control expect everyone else to have just as much self-control as they have.

In general, I find that people tend to assume that everyone else works the same way they do–for example, that criminals “know right from wrong,” in the same way as non-criminals, but for some reason chose to commit crimes.

Likewise, it appears that people who don’t have a lot of willpower assume that everyone else also doesn’t have a lot of willpower–and that conservatives are therefore lying when they say they hauled themselves up via bootstraps. (It must be some other, magical force at play.)

I am reminded here of a conversation I had with a liberal acquaintance over the Mike Brown case. (I feel compelled to note, here, that I don’t talk to this person anymore because I decided they have very bad judgment in the company they keep. That was a tough decision, because they did provide an interesting window into dysfunction.)

Anyway, it occurred to me as we were speaking that this person’s position on the case was shaped largely by their ability to imagine themselves in Mike Brown’s shoes: they had done a bit of “harmless shoplifting” as a teenager, and certainly didn’t see themselves as someone who ought to be shot, by the police or otherwise.

This person is, in many ways, mildly criminal. They get in fights, smoke pot, and probably jaywalk. Their relationships start fast and end in flames. (In their defense, they’re basically a nice person who cares about others; I hope they’re having a happy life.) They aren’t someone who deserves to have their life destroyed by imprisonment, but they are a little bit criminal.

Looking at my own perspective, I’ve never shoplifted–as a kid, if a vending machine gave me too much change, I returned it to the store. I tend to be overly rule-oriented–which explains why I harp so much on society’s lies. Lying bothers me.

At any rate, this mild criminality clearly affected my acquaintance’s opinion on the proper police response to crime; had they been a person who couldn’t imagine themselves stealing cigarettes (or cigars, or whatever,) they would not have identified so strongly with the situation.


I feel like this post comes down a little hard on liberals; in the interest of fairness, I feel compelled to note that these are all basically biological traits that people don’t have a ton of control over, and there are plenty of people in this world who have something good to contribute even though they have the self-control of a golden retriever in a room full of squeaky toys.

Has eliminating hookworms made people fatter?

Okay, yes, obviously when you take the gut parasites out of people, they tend to gain weight immediately after. That’s not exactly what I’m talking about.

First, let’s assume you come from a place where humans and hookworms have co-existed for a long, long time. The hookworms that just about everybody in the American South used to have appear to have come from Africa, so I think it safe to assume that hookworms have probably been infecting a lot of people in Africa for a long time. I don’t know how long–could be anywhere from a few hundred years, if they’d come from somewhere else or recently mutated or something, or could be tens of thousands or hundreds of thousands of years, if they’ve just always been hanging around. Let’s just go with tens of thousands, because if it wasn’t them, it was probably something else.

Over a few thousand years of constant infection, you’d expect to develop some sort of biological response to minimize the chances of death–that is, your ancestors would have evolved over time to be less susceptible to the parasite. Obviously not getting the parasite is one great way to avoid getting killed by it, but let’s assume that’s not an option.

Another solution would be to just absorb food differently–faster, say, or in a manner that circumvents the parts of the gut that are normally infected. Over time, humans and parasites might tend toward an equilibrium–humans stepping up their digestion to make up for what’s lost to the parasite.

Remove the parasite, and equilibrium is lost: suddenly the human starts gaining a lot of weight, especially compared to people from populations that did not adapt to the parasite.

That functional a gut isn’t needed anymore, but it might persist for a while if there are no counter-evolutionary pressures.

Scientific Nostalgia

So, I hear the Brontosaurus might return to the rolls of official dinosaurs, rather than oopsies. From Yale mag’s “The Brontosaurus is Back“:

Originally discovered and named by Yale paleontologist O. C. Marsh, Class of 1860, the “thunder lizard” was later determined to be the same as the Apatosaurus. But European researchers recently reexamined existing fossils and decided that Brontosaurus is in fact a separate species.”

Well, these things happen. I’m glad scientists are willing to revisit their data and revise their assumptions. Of course, I have no idea how much morphological difference is necessary between two skeletons before we start calling them different species, (by any sane metric, would a wolf hound and a chihuahua be considered the same species?) but I’m willing to trust the paleontologists on this one.

The interesting thing isn’t the reclassification itself, which gets down to somewhat dry and technical details about bone sizes and whatnot, but the fact that people–myself included!–have some sort of reaction to this news, eg:

Dinosaur lovers of a certain age are gratified. “I’m delighted,” says geology professor Jacques Gauthier, the Peabody’s curator of vertebrate paleontology and vertebrate zoology. “It’s what I learned as a kid.”

I’ve seen other people saying the same thing. Those of us who grew up with picture books with brontosauruses in them are happy at the news the brontosaurus is back–like finding an old friend again, or episodes of your favorite childhood show on YouTube. Perhaps you think, “Yes, now I can get a book of dinosaurs for my kids and share the animals I loved with them!”

Meanwhile some of us still cling to the notion that Pluto, despite its tiny size and eccentric orbit, really ought to be a planet. Even I feel a touch of anthropomorphizing pity for Pluto, even though I think from an objective POV that the current classification scheme is perfectly sensible.

Pluto is not the first round, rocky body to get named a planet and then demoted: in 1801, Giuseppe Piazzi discovered Ceres, a small, round, rocky body orbiting between Jupiter and Mars.

Finding a planet between Mars and Jupiter was intellectually satisfying on a number of levels, not least of which that it really seems like there ought to be one there. For the next 50 years, Ceres made it into the textbooks as our fifth planet–but by the 1860s, it had been demoted. A host of other, smaller bodies–some of them roundish–had also been discovered orbiting between Mars and Jupiter, and it was now clear that these were a special group of space bodies. They all got named asteroids, and Ceres went down the memory hole.

Ceres is smaller than Pluto, but they have much in common. As scientists discovered more small, Pluto-like bodies beyond Neptune’s orbit, the question of what is a planet revived. Should all non-moon, round bodies (those with enough gravity to make themselves round) be planets? That gets us to at least 13 planets, but possibly dozens–or hundreds–more.

There’s an obvious problem with having hundreds of planets, most of which are minuscule: kids would never learn ’em all. When you get right down to it, there are thousands of rocks and balls of ice and other such things zooming around the sun, and there’s a good reason most of them are known by numbers instead of names. You’ve got to prioritize data, and some sort of definition that would cut out the tiniest round ones was needed. Tiny Pluto, alas, ended up on the wrong side of the definition: not a planet.

Pluto is, of course, completely unaffected by a minor change in human nomenclature. And someday, like Ceres, Pluto may be largely forgotten by the public at large. In the meanwhile, there will still be nostalgia for the friendly science of one’s childhood.

Obvious Lies (Gypsies)

I remember it like it was, well, maybe a year ago. I was on my way to the children’s section at Borders and Noble when I spotted Isabel Fonseca’s Bury Me Standing: The Gypsies and Their Journey‘s bright yellow cover, beckoning to me from a nearby table. My parents claim I was in middle school; I think it was high school. Either way, the book went home with me: my first ethnography.

As an American–and a clueless teenager–I knew virtually nothing about Gypsies. I didn’t know that Europeans view them negatively, as tramps and thieves. I held romantic American notions of free-spirited musical wanderers, sculpted by the Renaissance Faire and Disney’s The Hunchback of Notre Dame.

Wait a minute, when did that shade of purple become popularly affordable?
Disney’s Esmeralda

You might have guessed that I really liked Esmeralda*, even though I thought the movie overall was all wrong for its target market.

*To be frank, kid-me didn’t differentiate much between different sorts of medium-toned people.

So I was really interested in Gypsies.

Short pause for terminology discussion: Yes, I am well aware of the terms Rom/Roma/Romani, which were discussed in the book. While I am perfectly happy to call anyone by whatever name they prefer, I really dislike euphemistic treadmills, because they end up as ways for snobbish people to signal their superiority over the hoi polloi who don’t yet know the newest words, and then the old terms become ways for other people to signal dislike of the group. I don’t like getting pressured into signaling one of these two things, and dispute that anyone has the right to force others into this dichotomy. “Gypsy” is not used as an insult or ethnic slur in the US, and it is the name which most Americans are familiar with; “Romani,” by contrast, is largely unknown. Therefore I use Gypsy, though I mean no insult.

Anyway, as you might expect, the ethnography did its best to cast its subject matter in a positive light–anthropologists feel an ethical obligation not to negatively impact the people who were nice enough to give them interviews and let them live in their homes and tell them about their culture, after all.

I have not revisited the book in years, so I don’t feel entitled to make many claims about its quality. Obviously teen-me liked it, but teen-me didn’t have much to compare it to. If you want to learn about the Gypsies, it’s probably as good a starting point as any, so long as you keep in mind that anthropologists tend to wear rose-tinted glasses.

One thing I remember well, though, was the author’s explanation for why Gypsy yards are so full of trash: Gypsies have strong notions of purity, and abhor touching anything unclean–including other people’s trash.

I was recently thinking back on this (not coincidentally, while cleaning up some trash that had gotten scattered down my street,) and realized, “Wait a minute! Everyone thinks trash is dirty! No one likes touching it! But you do it anyway, because otherwise your yard ends up full of trash.” Obviously I wash my hands after handling trash; so can everyone else. In retrospect, it seems so obvious.

So often we claim deep cultural significance for completely ordinary things. Trash ends up in people’s yards because they don’t bother to pick it up.

I confess: I felt like I’d been lied to–and like an idiot taking so long to notice.

Judging the gift by its cover: contents don’t matter

In my continuing quest to understand American gift-giving norms, I decided to test, (albeit informally,) my theory that the wrapping paper matters more than the present. Not that you can just give total crap and get away with it, (“Why is there a moldy shoe in this box?”) but that a mediocre gift paired with nice presentation will be appreciated more than a nice gift with bad presentation.

In the past, I have put a lot of (somewhat sporadic) effort into gifts, without feeling like they were much appreciated. I don’t mean that I received inadequate ego-stroking praise; I mean that I collected seedpods on a nearby mountain to grow flowers to give to a relative, and then when I returned to their house, pot and flowers were gone and I never received so much as a thank you. Heck, I’ve been groused at because large, hand-crafted items arrived a week late, the months spent making them having run over by a few days.

You might say “fuck them,” but family is something I have to deal with, whether I want to or not.

Was it the dirty flowerpot? My habitual lateness?

So this time, I grabbed a mediocre item I happened to have lying around and didn’t want, and that the recipient already had. It’s not a terrible item–it’s in good enough condition to still fit the gift category. Ten minutes before time to go, I hauled out the craft supplies and wrapped it up. (I can wrap anything, including soccer balls. I suspect it’s a side effect of being good at mentally rotating objects.) The net effect of ribbons and bows and paper and sparkles looked pretty darn good–much better than my usual technique of wrapping things in newspaper.

And success! Gift was actually appreciated (I even received a thank you.)

From now on, I am not trying so damn hard.

Further Thoughts on IQ

(warning, I got up 4 hours early today.)

So as I was saying, within the “normal” (average) range of IQ, most people who have the same number probably have about the same level of competency. But for outliers at the bottom end, it makes a difference how their low IQ came about: through natural genetic variation, or through an unfortunate accident. A person’s level of impairment has a lot to do with their deviation from the IQ they “should” have had and from the society at large–thus, a person who is naturally at 70 IQ and comes from a society where 70 is average is perfectly functional, whereas a person who was supposed to have a 100 IQ but got dropped on their head and suffered brain damage is not going to be very functional.

An IQ below 80 is somewhere between borderline and severely impaired in the US; about 8 or 9% of people score below 80. About 2.5% score as severely impaired, below 70. People with IQs under 70 can’t be executed, and can only reach, at max, the intellectual level of a 12-year-old.* People below 60 have severe impairments and may never be able to live alone.
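Those percentages are just what the conventional normal model of IQ scores (mean 100, standard deviation 15) predicts; a quick sketch, assuming that standard scaling:

```python
from math import erf, sqrt

def iq_fraction_below(cutoff, mean=100.0, sd=15.0):
    """Fraction of a normal(mean, sd) population scoring below cutoff,
    via the normal CDF expressed with the error function."""
    z = (cutoff - mean) / sd
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

print(f"below 80: {iq_fraction_below(80):.1%}")  # roughly 9%
print(f"below 70: {iq_fraction_below(70):.1%}")  # roughly 2.3%
```

Real score distributions deviate a bit from the normal curve at the tails, but the back-of-envelope numbers match the figures above.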

*Still putting them well ahead of the smartest apes–to be honest, people blathering on about how apes and dolphins are “as smart as humans” kind of get on my nerves.

And yet, many successful societies have survived without any ability to read, or any numbers beyond three. These folks tend to score badly on IQ tests, but they get along just fine in their own societies, and I assume they are very happy with their lives the way they are. When people say that people in society Foo have average IQs around Bar, this is not the same as saying they are severely disabled–it’s a different kind of low IQ.

One thing I wonder about: let’s say someone was supposed to have an IQ of 140, due to genetics, but an accident interfered–say, they got dropped on their head–and they lost 40 IQ points. If they’d started at 100, they’d have dropped down to 60, which is pretty darn impaired. But starting from 140, they’ve only dropped to 100.

I assume that, even though they would test as “normal” on an IQ test, the loss of 40 IQ points would result in severe life impairment of some sort. I don’t know if the DSM/other ways of diagnosing people with disabilities could pick up on such people at all, or maybe they’d end up with some random diagnosis due to however the condition manifests. I also wonder how such a person would compare to someone who is just naturally low-IQ–would they have a better chance of thriving in a society that’s not as IQ-demanding? Or would they be just as dysfunctional there? Or would they have enough wits left about them to make up for the damage they’d suffered, and do fine in life?
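To make the arithmetic explicit, here’s a toy model (all numbers made up) in which measured IQ is genetic baseline minus points lost to injury, so two people with identical scores can carry very different amounts of damage:

```python
# Toy model: measured IQ = genetic baseline minus points lost to injury.
# In this framing, functional impairment tracks the loss, not the score.
def measured_iq(baseline, injury_loss):
    return baseline - injury_loss

naturally_average = (100, 0)   # baseline 100, no injury
injured_smart = (140, 40)      # "supposed to be" 140, lost 40 points

for baseline, loss in (naturally_average, injured_smart):
    print(f"measured: {measured_iq(baseline, loss)}, points of damage: {loss}")
# Both measure 100, but only the second carries 40 points of damage.
```

An IQ test sees only the left-hand number; the right-hand number is exactly what it can’t detect.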

One thing I suspect: growing up would be hell for them. Their siblings (assuming their 140 IQ parents manage to have any other kids,) would pursue advanced degrees and become doctors, lawyers, finance bros, or professors. Their parents probably are professors. Their parents would be pushing them to take advanced maths in elementary school (“Your brother could work with negative numbers when he was 4!”) when they struggle with fractions, and they’d feel like shit for consistently being dumber than everyone around them. Their parents would shove them into remedial classes, because “average” looks suspiciously like “dumb as a rock” when you’re really smart, and then force them to go to college whether they belong there or not.

Of course, this is what our society tries to do to everyone, under the assumption that enough… pre-k? tutoring? organic food? Sesame Street?… can turn anyone into a math professor.

Two Kinds of Dumb

The impression one gets from briefly skimming anything on IQ is that two people with the same IQ are (supposed to be) about equally smart. And within the normal range of IQs, for a roughly homogenous population, this is basically true.

But for those at the bottom of the IQ range, this is not true–because there are two ways to be dumb.

This is, obviously, an oversimplification. There are probably 7.2 billion ways to be dumb. But bear with me; I think the oversimplification is justified for the sake of conversation.

Some people have very low IQs because something bad happened to them–they were dropped on their heads as babies, ate a chunk of lead, or have an extra copy of chromosome 21. In these sorts of cases, had the accident not occurred, the individual would have been smarter. Their genetic potential, as it were, is not realized, and–with the exception of accidents affecting genetics–if they had kids, their kids would not share their low IQ, but reflect their parents’ genetic IQ.

The second kind of low IQ happens just because a person happened to have parents who weren’t very bright. There is a great range in people’s natures, IQs included, and some folk are on the low end. Just as Gnon made some people with 140 IQs, so Gnon made some people with 60. There is nothing wrong with these people, per se. They tend to be perfectly functional, at least in a society of other people like themselves–they just don’t score very well on IQ tests and tend not to get degrees in math.

People who have suffered some form of traumatic brain injury tend not to be very functional. They lose all kinds of functional brain processing stuff, not just the ability to predict which way the arrow is going to be rotated next. By contrast, people who are just not genetically blessed with smarts have built entire cultures.

To clarify, and use an extreme example, let’s think back to our most ancient ancestors. Since we can’t talk to them, let’s look at our cousins, the other great apes.

Koko the Gorilla can use about 1,000 signs and understands about 2,000 human words. Kanzi, a bonobo who can play Pacman, make stone tools, and cook his own dinner, understands about 3,000 words and 348 symbols. Both Kanzi and Koko are probably on the highly intelligent end for their species, because you don’t hear all that much about the classmates whom scientists have also been trying to teach to talk.

According to different websites on child development, a 2 year old human typically uses about 150-300 words; a 3 year old uses about 900-1,000 words; a 4 year old uses 4,000-6,000 words.

As a general rule of thumb, human children, like great apes, understand more than they can say. So let’s say that Koko and Kanzi are about as smart as a human somewhere between 2 and 4 years old.

Now consider what would happen if you let your three year old (or someone else’s three year old if you don’t have one of your own) loose in the jungle: they’d die. Quickly. A three year old sucks at making omelets, their flint-knapping leaves much to be desired, and most of them aren’t even very good at Pacman. They’re also really bad at climbing trees and still need help opening their bananas, and they don’t really understand why they should avoid lions.

Bonobos and gorillas, by contrast, survive just fine in the jungle, so long as humans don’t shoot them.

A grown human with the intellect of a 4 yr old would not be functional, in human society or gorilla society. A gorilla with the language abilities of a 4 yr old human is a very smart and completely functional-in-gorilla-society gorilla. A gorilla is not dumb; a gorilla is exactly as smart as it is supposed to be.

Since the age when our ancestors diverged from the other primates, our species has been marked by increasing brain capacity and, presumably, intelligence. We’ve gone from doing one-to-one correspondence on our fingers to calculus and diff eq. Which means that a great many human societies have fallen somewhere in between–societies where the average person could count to ten; societies where the average person could do basic arithmetic; societies where the average person can do some abstract thought, but not a ton. Each society has its own level of organization and particular environment, requiring different mental toolkits.

But within our own society, we are not identical; we are not clones. So some of us are smart and some are dumb. But a person who is naturally dumb is generally more functional than someone who suffered some accident; there are different kinds of low IQ.

White Women’s Tears

Did you know that white women cry a lot? And that it annoys the crap out of black women? I didn’t, either, but “White women’s tears” is a thing SJWs and anti-racists actually talk about. Apparently black women hate it when white women cry.

Some quotes from around the internet:

[screenshots of quotes from social media]

I bet "white girl tears" also works

Serena Williams Drinks, Bathes In, And Makes Lemonade With White Tears:

[screenshot from the article]

NOTE: That subtitle is from the article, NOT from me. The person who wrote the article is claiming that Serena Williams makes tear-ade, not me.

Some people even write scholarly articles on the subject, e.g., “When White Women Cry: How White Women’s Tears Oppress People of Color” (PDF)

Betcha didn’t know that crying is a form of oppression.

The general sentiment is not just that white women cry a lot, but also that they do it specifically to avoid getting blamed for racism–like “crocodile tears,” implying that the emotions behind them are not real:

No one likes it when you cry.
–From the Urban Dictionary

Personally, I’d never given tears a second thought (other than not particularly liking them,) until I stumbled upon these sorts of comments. If I cry, it’s because of emotions, not because I’m trying to avoid blame.

But I suspect these black ladies are actually on to something. White women probably do cry more than black women. No, not to get out of being called racist; they cry because they’re biologically inclined to deal with conflict by crying.

Peter Frost speculates that whites, particularly white women, have been selected for neotenous traits, like pale skin and hair. (Frost has a ton of posts on the subject, so I’m not going to link to them all; you can just go read his blog if you want the details on his argument. I’m summarizing as relevant here; please forgive me if I’ve accidentally mixed in some arguments from West Hunter; it’s hard to keep my thoughts tagged with original authors for too long.)

This implies that white women are more neotenous than black women.

In many traditional African societies, women basically raise their children on their own or with the help of their kin networks, probably leading to a genetic preference for polygyny rather than monogamy and a personality type that we might characterize as strong, independent women. As I have previously noted, these societies happen to be very likely ancestral to much of the US’s African population.

By contrast, the cold, harsh winters of the northern European climate forced people into monogamous relationships in which the men did a lot of the back-breaking agricultural labor. Actually, Frost argues that it started before the advent of agriculture, but with mate selection on the ice age steppes:

It seems that this evolution took place between 20,000 and 10,000 years ago—long after modern humans had arrived in Europe some 40,000 years ago. This is when Europeans acquired their most visible features: white skin, multi-hued eyes and hair, and a more childlike face shape. In my opinion, such features were an adaptation not to weak sunlight but to a competitive mate market where men were scarce because they were less polygynous and more at risk of early death. This situation prevailed on the European steppe-tundra of the last ice age, whose high bio-productivity made possible a relatively large human population at the cost of a chronic oversupply of mateable women. The result was an unusually intense degree of sexual selection.

The problem with living in close proximity to men is that men are violent and aggressive. Luckily, men appear to already have a neural subroutine for decreasing aggression: look/act like a baby. Being around babies or small children appears to make men less aggressive/reduces the quantity of the sorts of hormones that lead to aggression, which leads in turn to men being less likely to murder their own children.

The development of neotenous features in women thus both increased the chances of men bonding with them and wanting to care for them (the neural subroutine for bonding with and caring for children hopefully does not require discussion,) and decreased their chances of being victims of male aggression as proximity increased.

Crying is chiefly a characteristic of babies and small children. Grownups cry far less; men have historically prided themselves (for better or worse) on not crying.

Tears are, for white women, an effective means of decreasing white male aggression. I’m not saying they’re a conscious strategy (though of course sometimes they are.) I’m saying that for thousands of years, white women who cried more were less likely to die childless than women who cried less. So most of the time, when they start to cry, it’s unintentional–they can’t help it. That’s just how they’re wired.

(Note: You don’t have to buy Frost’s line of reasoning to believe that white women cry more than black women; Greg Cochran attributes neotenous white features to selective pressure on white men to behave themselves in large social groups–the idea that “civilization” “domesticated” people; Rushton links longer infancy and childhood and late retention of neotenous features to brain development. There are many potential explanations, but one thing that does seem to be widely agreed upon is that whites are more neotenous than blacks.)

White women cry even when race isn’t being discussed (though this may not be obvious if your only interaction with white women is via SJW/anti-racist communities, where racism and women dominate almost all discussions.) White women love “tear jerker” movies, romance novels, and women’s fiction, all designed to make them cry. They weep over the Pope’s latest pronouncements. They cry about their hair and their weight and their makeup and just about anything, really. They probably even cry if you criticize their lab results. (Of course, if it’s a white dude who’s done nothing to contribute to the advancement of humanity but win a Nobel Prize in Physiology / Medicine for his discoveries of protein molecules that control cell division who says that white women cry [let’s not kid ourselves about the skin tones of most women in science,] then you’re an evil sexist woman-oppressor, rather than a brave social justice warrior helping create our new and better future by calling out racist white women for derailing the conversation.)

Black women, by contrast, haven’t been subject to the same neotenizing pressures. They simply aren’t wired to cry at the drop of a hat. To them, white women are acting like whiny babies:

white people are whiny babies

and thus the contempt for “white women’s tears.”