Conspiracy Theory Theory

Our ancestors–probably long before they were even human–had to differentiate between living and non-living things. Living things can be eaten (and, importantly, can eat you back); non-living things generally taste bad, can’t be eaten, and won’t try to eat you.

This is a task of such essential importance that I think it is basically an innate ability common to all thinking animals. Rabbits and fish need to distinguish between living and non-living things; both need to know whether the lump over there is a rock or a predator, after all. And we humans don’t have to explain to our children that cats and dogs are alive but tables aren’t. (Indeed, a defect in this ability that causes a person to regard tables as alive, or other people as not, is remarkable–and dangerous–when it happens.)

It is easy to divide most things into living and non-living. Living things move and grow; non-living things do not. Rabbits move. Rocks don’t. (Plants don’t move much, but they do grow. They’re also helpfully color-coded.)

But what about non-living things that nonetheless move and grow, like rivers or clouds? You can’t catch a cloud; you can’t eat it; but it still has a behavior that we can talk about, eg: “The clouds are building up on the horizon,” “The clouds moved in from the east,” “The clouds faded away.” Clouds and stars, sun and moon, rivers and tides all have their particular behaviors, unlike rocks, dirt, and fallen logs.

When it comes to mistakes along the living/non-living boundary, it is clearly better to mistakenly believe that something might be alive than to assume that it isn’t. If I mistake a rock for a lion, I will probably live until tomorrow, but if I mistake a lion for a rock, I very well may not. So we are probably inclined to treat anything that basically moves and behaves like a living thing as a living thing, at least until we have more information about it.
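This lopsided payoff can be put in simple expected-cost terms. Here is a toy sketch (the costs and the 1% lion-probability are invented purely for illustration):

```python
# Toy illustration of asymmetric error costs (all numbers invented).
# False positive: treating a rock as a lion -> small cost (wasted caution).
# False negative: treating a lion as a rock -> huge cost (you get eaten).

COST_FALSE_POSITIVE = 1      # a few minutes of needless wariness
COST_FALSE_NEGATIVE = 1000   # death, in arbitrary units

def expected_cost(p_lion: float, assume_alive: bool) -> float:
    """Expected cost of a policy, given the probability the lump is a lion."""
    if assume_alive:
        # Always act cautious; we only pay when the lump was just a rock.
        return (1 - p_lion) * COST_FALSE_POSITIVE
    # Always act casual; we only pay when the lump was really a lion.
    return p_lion * COST_FALSE_NEGATIVE

# Even at a 1% chance of "lion", caution wins by a wide margin:
p = 0.01
print(expected_cost(p, assume_alive=True))   # 0.99
print(expected_cost(p, assume_alive=False))  # 10.0
```

Under almost any plausible numbers the cautious policy wins, which is presumably why we default to over-detecting living things.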

And thus our ancestors, who had no information about how or why the sun moved through the sky, were left to conclude that the sun was either a conscious being that moved because it wanted to, or was at least controlled by such a being. Same for the moon and the stars, the rivers and tides.

Moreover, these beings were clearly more powerful than men, especially ancient men. We cannot catch the sun; we live at the mercy of the wind and the rain. Rivers can sweep us away, and sudden storms dash boats to pieces. We live or die according to their whims.

So ancient man believed these things were sentient, called them “gods” (or devils) and attempted to placate them through sacrifice and prayer.

Centuries of scientific research have gradually uncovered the secrets of the universe. We’ve figured out why the sun appears to move as it does, why clouds form, and that frogs aren’t actually generated by mud. We’ve also figured out that the “influence” (influenza, in Italian) of the stars doesn’t actually cause sickness, though the name persists.

We know better rationally, but the instinct to ascribe personhood to certain inanimate objects still persists: it’s why programs like Thomas the Tank Engine are so popular with children. Trains move, therefore trains are alive and must have feelings and personalities. It’s why I have to remind myself occasionally that Pluto is an icy space rock and doesn’t actually feel sad about being demoted from planet to dwarf planet.

If something acts like a conscious thing and talks like a conscious thing, we’re still liable to treat it like a conscious thing–even if we know it’s not.

Today, the vast implacable forces that rain down on people’s lives are less the weather and more often organizations like the IRS or the local grocery store. These organizations clearly “do” things on purpose, because they were set up with that intention. The grocery store sells groceries. The IRS audits your taxes. Wendy’s posts on Twitter. The US invades other countries.

If organizations act like conscious entities, then it is natural for people to think of them as conscious entities, even though we know they are actually made of hundreds or thousands of individual people (many of whom don’t even like their jobs) executing to various degrees of accuracy the instructions and procedures laid down for them by their bosses and predecessors for how to get things done. The bag boy at the grocery store does not think about lofty matters like “how to get food from the farm to the table,” he merely puts the groceries in the bags, with an eye toward not breaking the eggs and not using too many bags.

Human institutions often become so big that no one has effective control over them anymore. One side has no idea how the other side is operating. An organization may forget its original purpose entirely, eg, MTV’s transition away from music videos and The Learning Channel’s away from anything educational.

When this happens, their behavior begins to look erratic. Why would an organization do anything counter to its stated purpose? The real answer is dissatisfying to people: no one is actually running the show; the entire organization is just a loose network of people, each following the instructions for their own little part without any oversight or ability to affect the overall whole; and the entire machinery has gone completely out of kilter. Since the organization looks like a conscious thing, people reason, it must be a conscious thing, and it must therefore have reasons for its behavior.

Trying to explain organizations’ behaviors in terms of conscious intent gets us quickly into the realm of conspiracy theories. For example, I am sure you have all heard the claim that, “Cheap cancer cures exist, but doctors don’t want you to know about them because they want to keep you sick for longer so they can sell you more expensive medicines.” Well, this is kind of half-true. The true part is that the medical system is biased toward more expensive medications, but not because doctors make more from them. (If you could prove that you can cure cancer with, say, a mega-dose of Vitamin C, the vitamin companies would be absolutely thrilled to bring “Cancer Bustin’ Vit C” to market.) The not-true part is the idea that this is all being done intentionally.

Doctors can only prescribe medications that have official FDA approval. This keeps patients safe from quackery and keeps doctors safe(r) from the possibility of getting sued if their treatments don’t work or have unexpected side effects.

FDA approval is difficult to get. The process requires long and rigorous medical trials to ensure that medications are safe and effective. Long, rigorous medical trials are expensive.

As a result, pharmaceutical companies only want to spend millions of dollars on medical trials for drugs that they think have the potential to make millions of dollars. Any drug company that spent millions of dollars on trials for cheap treatments it couldn’t sell for millions of dollars would quickly go out of business.

To sum:

  1. Doctors can only prescribe FDA-approved treatments
  2. The FDA requires long, rigorous trials to make sure treatments are safe
  3. Long trials are expensive
  4. Drug companies therefore prefer to do expensive trials only on expensive drugs they can actually make money on.

No one designed this system with the intention of keeping cheap medical treatments off the market, because no one designed the system in the first place. It was assembled bit by bit over the course of a hundred years by different people from different organizations with different interests. It is the sum total of thousands (maybe millions) of people’s decisions, most of which made sense at the time.

That said, the system actually does make it harder for patients to get cheap medical treatments. The fact that this consequence is unintended does not make it any less real (or important).

There are, unfortunately, plenty of people who focus only on each particular step in the process, decide that each step is justified, and conclude that the net result must therefore also be justified, without ever looking at that result. This is the opposite error from over-ascribing intention to organizations: a failure to acknowledge that the unintended, emergent behaviors of organizations exist and have real consequences. These sorts of people will generally harp on the justification for particular rules and insist that these justifications are so important that they override any greater concern. For example, they will insist that it is vital that drug trials cost millions of dollars in order to protect patients from potential side effects, while ignoring the patients who died because drug companies couldn’t afford to develop treatments for their disorders.

But back to conspiracy theories: when organizations act like conscious creatures, it is very natural to think that they actually are conscious, or at least are controlled by conscious, intentional beings. It’s much more satisfying, frankly, than just assuming that they are made up of random people who actually have no idea what they’re doing.

Now that I think about it, this is all very fundamental to the principal ideas underlying this blog: organizations act like conscious creatures and are subject to many of the same biological rules as conscious creatures, but do not possess true consciousness.

Businesses, for example, must make enough money to cover their operating expenses, just as animals must eat enough calories to power their bodies. If one restaurant produces tasty food more efficiently than its competitor, thus making more money, then it will tend to outcompete and replace that competitor. Restaurants that cannot make enough money go out of business quickly.

Similarly, countries must procure enough food/energy to feed their people, or mass starvation will occur. They must also be strong enough to defend themselves against other countries, just as animals have to make sure other animals don’t eat them.

Since these organizations act like conscious creatures, it is a convenient shorthand to talk about them as though they were conscious. We say things like, “The US invaded Vietnam,” even though the US as a whole never decided that it would be a good idea to invade Vietnam and then went and did so. (The president has a major role in US foreign policy, but he doesn’t act alone.)

Most systems/organizations don’t have anyone that’s truly in charge. We can talk about “the American medical system,” but there is no one who runs the American medical system. We can talk about “the media,” but there is no one in charge of the media; no one decided one day that we were switching from paper newspapers to online click-bait. We talk about “society,” but no one is in charge of society.

This is not to say that organizations never have anyone in charge: tons of them do. Small businesses and departments in particular tend to have someone running them and goals they are trying to accomplish. I’m also not saying that conspiracies never happen: of course they do. These are just general observations about the general behavior of organized human groups: they can act like living creatures and are subject to many of the same rules as living creatures, which makes us inclined to think of them as conscious even when they aren’t.

Testosterone metabolization, autism, male brain, and female identity

I began this post intending to write about testosterone metabolization in autism and possible connections with transgender identity, but realized halfway through that I didn’t actually know whether the autist-trans connection was primarily male-to-female or female-to-male. I had assumed that the relevant population is primarily MtF because both autists and trans people are primarily male, but both groups do have female populations that are large enough to contribute significantly. Here’s a sample of the data I’ve found so far:

A study conducted by a team of British scientists in 2012 found that of a pool of individuals not diagnosed on the autism spectrum, female-to-male (FTM) transgender people have higher rates of autistic features than do male-to-female (MTF) transgender people or cisgender males and females. Another study, which looked at children and adolescents admitted to a gender identity clinic in the Netherlands, found that almost 8 percent of subjects were also diagnosed with ASD.

Note that both of these studies are looking at trans people and assessing whether or not they have autism symptoms, not looking at autists and asking if they have trans symptoms. Given the characterization of autism as “extreme male brain” and that autism is diagnosed in males at about 4x the rate of females, the fact that there is some overlap between “women who think they think like men” and “traits associated with male thought patterns” is not surprising.

If the reported connection between autism and trans identity is just “autistic women feel like men,” that’s pretty non-mysterious and I just wasted an afternoon.

Though the data I have found so far still does not look directly at autists and ask how many of them have trans symptoms, the Wikipedia page devoted to transgender and transsexual computer programmers lists only MtFs and no FtMs. Whether or not this is a pattern throughout the wider autism community, it definitely seems to be a thing among programmers. (Relevant discussion.)

So, returning to the original post:

Autism contains an amusing contradiction: on the one hand, autism is sometimes characterized as “extreme male brain,” and on the other hand, (some) autists (may be) more likely than neurotypicals to self-identify as transwomen–that is, biological men who see themselves as women. This seems contradictory: if autists are more masculine, mentally, than the average male, why don’t they identify as football players, army rangers, or something else equally masculine? For that matter, why isn’t a group with “extreme male brains” regarded as more, well, masculine?

(And if autists have extreme male brains, does that mean football players don’t? Do football players have more feminine brains than autists? Do colorless green ideas sleep furiously? DO WORDS MEAN?)

*Ahem*

In favor of the “extreme male brain” hypothesis, we have evidence that testosterone is important for certain brain functions, like spatial recognition. For example, from Testosterone and the brain:

Gender differences in spatial recognition, and age-related declines in cognition and mood, point towards testosterone as an important modulator of cerebral functions. Testosterone appears to activate a distributed cortical network, the ventral processing stream, during spatial cognition tasks, and addition of testosterone improves spatial cognition in younger and older hypogonadal men. In addition, reduced testosterone is associated with depressive disorders.

(Note that women also suffer depression at higher rates than men.)

So people with more testosterone are better at spatial cognition and other tasks that “autistic” brains typically excel at, and brains with less testosterone tend to be moody and depressed.

But hormones are tricky things. Where do they come from? Where do they go? How do we use them?

According to Wikipedia:

During the second trimester [of pregnancy], androgen level is associated with gender formation.[13] This period affects the femininization or masculinization of the fetus and can be a better predictor of feminine or masculine behaviours such as sex typed behaviour than an adult’s own levels. A mother’s testosterone level during pregnancy is correlated with her daughter’s sex-typical behavior as an adult, and the correlation is even stronger than with the daughter’s own adult testosterone level.[14]

… Early infancy androgen effects are the least understood. In the first weeks of life for male infants, testosterone levels rise. The levels remain in a pubertal range for a few months, but usually reach the barely detectable levels of childhood by 4–6 months of age.[15][16] The function of this rise in humans is unknown. It has been theorized that brain masculinization is occurring since no significant changes have been identified in other parts of the body.[17] **The male brain is masculinized by the aromatization of testosterone into estrogen, which crosses the blood–brain barrier and enters the male brain, whereas female fetuses have α-fetoprotein, which binds the estrogen so that female brains are not affected.[18]**

(Bold mine.)

Let’s re-read that: the male brain is masculinized by the aromatization of testosterone into estrogen.

If that’s not a weird sentence, I don’t know what is.

Let’s hop over to the scientific literature, eg, Estrogen Actions in the Brain and the Basis for Differential Action in Men and Women: A Case for Sex-Specific Medicines:

Burgeoning evidence now documents profound effects of estrogens on learning, memory, and mood as well as neurodevelopmental and neurodegenerative processes. Most data derive from studies in females, but there is mounting recognition that estrogens play important roles in the male brain, where they can be generated from circulating testosterone by local aromatase enzymes or synthesized de novo by neurons and glia. Estrogen-based therapy therefore holds considerable promise for brain disorders that affect both men and women. However, as investigations are beginning to consider the role of estrogens in the male brain more carefully, it emerges that they have different, even opposite, effects as well as similar effects in male and female brains. This review focuses on these differences, including sex dimorphisms in the ability of estradiol to influence synaptic plasticity, neurotransmission, neurodegeneration, and cognition, which, we argue, are due in a large part to sex differences in the organization of the underlying circuitry.

Hypothesis: the way testosterone works in the brain (where we both do math and “feel” male or female) and the way it works in the muscles might be very different.

Do autists actually differ from other people in testosterone (or other hormone) levels?

In Elevated rates of testosterone-related disorders in women with autism spectrum conditions, researchers surveyed autistic women and mothers of autistic children about various testosterone-related medical conditions:

Compared to controls, significantly more women with ASC [Autism Spectrum Conditions] reported (a) hirsutism, (b) bisexuality or asexuality, (c) irregular menstrual cycle, (d) dysmenorrhea, (e) polycystic ovary syndrome, (f) severe acne, (g) epilepsy, (h) tomboyism, and (i) family history of ovarian, uterine, and prostate cancers, tumors, or growths. Compared to controls, significantly more mothers of ASC children reported (a) severe acne, (b) breast and uterine cancers, tumors, or growths, and (c) family history of ovarian and uterine cancers, tumors, or growths.

Androgenic Activity in Autism has an unfortunately low number of subjects (N=9) but their results are nonetheless intriguing:

Three of the children had exhibited explosive aggression against others (anger, broken objects, violence toward others). Three engaged in self-mutilations, and three demonstrated no aggression and were in a severe state of autistic withdrawal. The appearance of aggression against others was associated with having fewer of the main symptoms of autism (autistic withdrawal, stereotypies, language dysfunctions).

Three of their subjects (they don’t say which, but presumably from the first group,) had abnormally high testosterone levels (including one of the girls in the study.) The other six subjects had normal androgen levels.

This is the first report of an association between abnormally high androgenic activity and aggression in subjects with autism. Although a previously reported study did not find group mean elevations in plasma testosterone in prepubertal autistic subjects (4), it appears here that in certain autistic individuals, especially those in puberty, hyperandrogeny may play a role in aggressive behaviors. Also, there appear to be distinct clinical forms of autism that are based on aggressive behaviors and are not classified in DSM-IV. Our preliminary findings suggest that abnormally high plasma testosterone concentration is associated with aggression against others and having fewer of the main autistic symptoms.

So, some autists do have abnormally high testosterone levels, but those same autists are less autistic, overall, than other autists. More autistic behavior, aggression aside, is associated with normal hormone levels. Probably.

But of course that’s not fetal or early infancy testosterone levels. Unfortunately, it’s rather difficult to study fetal testosterone levels in autists, as few autists were diagnosed as fetuses. However, Foetal testosterone and autistic traits in 18 to 24-month-old children comes close:

Levels of FT [Fetal Testosterone] were analysed in amniotic fluid and compared with autistic traits, measured using the Quantitative Checklist for Autism in Toddlers (Q-CHAT) in 129 typically developing toddlers aged between 18 and 24 months (mean ± SD 19.25 ± 1.52 months). …

Sex differences were observed in Q-CHAT scores, with boys scoring significantly higher (indicating more autistic traits) than girls. In addition, we confirmed a significant positive relationship between FT levels and autistic traits.

I feel like this is veering into “we found that boys score higher on a test of male traits than girls did” territory, though.

In Polymorphisms in Genes Involved in Testosterone Metabolism in Slovak Autistic Boys, researchers found:

The present study evaluates androgen and estrogen levels in saliva as well as polymorphisms in genes for androgen receptor (AR), 5-alpha reductase (SRD5A2), and estrogen receptor alpha (ESR1) in the Slovak population of prepubertal (under 10 years) and pubertal (over 10 years) children with autism spectrum disorders. The examined prepubertal patients with autism, pubertal patients with autism, and prepubertal patients with Asperger syndrome had significantly increased levels of salivary testosterone (P < 0.05, P < 0.01, and P < 0.05, respectively) in comparison with control subjects. We found a lower number of (CAG)n repeats in the AR gene in boys with Asperger syndrome (P < 0.001). Autistic boys had an increased frequency of the T allele in the SRD5A2 gene in comparison with the control group. The frequencies of T and C alleles in ESR1 gene were comparable in all assessed groups.

What’s the significance of CAG repeats in the AR gene? Apparently they vary inversely with sensitivity to androgens:

Individuals with a lower number of CAG repeats exhibit higher AR gene expression levels and generate more functional AR receptors increasing their sensitivity to testosterone…

Fewer repeats, more sensitivity to androgens. The SRD5A2 gene is also involved in testosterone metabolization, though I’m not sure exactly what the T allele does relative to the other variants.

But just because there’s a lot of something in the blood (or saliva) doesn’t mean the body is using it. Diabetics can have high blood sugar because their bodies lack the insulin necessary to move the sugar from the blood into their cells. Fewer androgen receptors could mean the body is metabolizing testosterone less effectively, which in turn leaves more of it floating in the blood… Biology is complicated.

What about estrogen and the autistic brain? That gets really complicated. According to Sex Hormones in Autism: Androgens and Estrogens Differentially and Reciprocally Regulate RORA, a Novel Candidate Gene for Autism:

Here, we show that male and female hormones differentially regulate the expression of a novel autism candidate gene, retinoic acid-related orphan receptor-alpha (RORA) in a neuronal cell line, SH-SY5Y. In addition, we demonstrate that RORA transcriptionally regulates aromatase, an enzyme that converts testosterone to estrogen. We further show that aromatase protein is significantly reduced in the frontal cortex of autistic subjects relative to sex- and age-matched controls, and is strongly correlated with RORA protein levels in the brain.

If autists are bad at converting testosterone to estrogen, this could leave extra testosterone floating around in their blood… but it doesn’t explain their supposed “extreme male brain.” Here’s another study on the same subject, since it’s confusing:

Comparing the brains of 13 children with and 13 children without autism spectrum disorder, the researchers found a 35 percent decrease in estrogen receptor beta expression as well as a 38 percent reduction in the amount of aromatase, the enzyme that converts testosterone to estrogen.

Levels of estrogen receptor beta proteins, the active molecules that result from gene expression and enable functions like brain protection, were similarly low. There was no discernable change in expression levels of estrogen receptor alpha, which mediates sexual behavior.

I don’t know if anyone has tried injecting RORA-deficient mice with estrogen, but here is a study about the effects of injecting reelin-deficient mice with estrogen:

The animals in the new studies, called ‘reeler’ mice, have one defective copy of the reelin gene and make about half the amount of reelin compared with controls. …

Reeler mice with one faulty copy serve as a model of one of the most well-established neuro-anatomical abnormalities in autism. Since the mid-1980s, scientists have known that people with autism have fewer Purkinje cells in the cerebellum than normal. These cells integrate information from throughout the cerebellum and relay it to other parts of the brain, particularly the cerebral cortex.

But there’s a twist: both male and female reeler mice have less reelin than control mice, but only the males lose Purkinje cells. …

In one of the studies, the researchers found that five days after birth, reeler mice have higher levels of testosterone in the cerebellum compared with genetically normal males3.

Keller’s team then injected estradiol — a form of the female sex hormone estrogen — into the brains of 5-day-old mice. In the male reeler mice, this treatment increases reelin levels in the cerebellum and partially blocks Purkinje cell loss. Giving more estrogen to female reeler mice has no effect — but females injected with tamoxifen, an estrogen blocker, lose Purkinje cells. …

In another study, the researchers investigated the effects of reelin deficiency and estrogen treatment on cognitive flexibility — the ability to switch strategies to solve a problem4. …

“And we saw indeed that the reeler mice are slower to switch. They tend to persevere in the old strategy,” Keller says. However, male reeler mice treated with estrogen at 5 days old show improved cognitive flexibility as adults, suggesting that the estrogen has a long-term effect.

This still doesn’t explain why autists would self-identify as transgender women (mtf) at higher rates than average, but it does suggest that any who do start hormone therapy might receive benefits completely independent of gender identity.

Let’s stop and step back a moment.

Autism is, unfortunately, badly defined. As the saying goes, if you’ve met one autist, you’ve met one autist. There are probably a variety of different, complicated things going on in the brains of different autists simply because a variety of different, complicated conditions are all being lumped together under a single label. Any diagnosis that can cover both non-verbal people who can barely dress and feed themselves and require lifetime care, and billionaires like Bill Gates, is a very badly defined condition.

(Unfortunately, people diagnose autism with questionnaires that include questions like “Is the child pedantic?” which could be equally true of both an autistic child and a child who is merely very smart and has learned more about a particular subject than their peers and so is responding in more detail than the adult is used to.)

The average autistic person is not a programmer. Autism is a disability, and the average diagnosed autist is pretty darn disabled. Among the people who have jobs and friends but nonetheless share some symptoms with formally diagnosed autists, though, programming and the like appear to be pretty popular professions.

Back in my day, we just called these folks nerds.

Here’s a theory from a completely different direction: People feel the differences between themselves and a group they are supposed to fit into and associate with a lot more strongly than the differences between themselves and a distant group. Growing up, you probably got into more conflicts with your siblings and parents than with random strangers, even though–or perhaps because–your family is nearly identical to you genetically, culturally, and environmentally. “I am nothing like my brother!” a man declares, while simultaneously affirming that there is a great deal in common between himself and members of a race and culture from the other side of the planet. Your coworker, someone specifically selected for the fact that they have similar mental and technical aptitudes and training as yourself, has a distinct list of traits that drive you nuts, from the way he staples papers to the way he pronounces his Ts, while the women of an obscure Afghan tribe of goat herders simply don’t enter your consciousness.

Nerds, somewhat by definition, don’t fit in. You don’t worry much about fitting into a group you’re not part of in the first place–you probably don’t worry much about whether or not you fit in with Melanesian fishermen–but most people work hard at fitting in with their own group.

So if you’re male, but you don’t fit in with other males (say, because you’re a nerd,) and you’re down at the bottom of the high school totem pole and feel like all of the women you’d like to date are judging you negatively next to the football players, then you might feel, rather strongly, the differences between you and other males. Other males are aggressive, they call you a faggot, they push you out of their spaces and threaten you with violence, and there’s very little you can do to respond besides retreat into your “nerd games.”

By contrast, women are polite to you, not aggressive, and don’t aggressively push you out of their spaces. Your differences with them are much less problematic, so you feel like you “fit in” with them.

(There is probably a similar dynamic at play with American men who are obsessed with anime. It’s not so much that they are truly into Japanese culture–which is mostly about quietly working hard–as they don’t fit in very well with their own culture.) (Note: not intended as a knock on anime, which certainly has some good works.)

And here’s another theory: autists have some interesting difficulties with constructing categories and making inferences from data. They also have trouble going along with the crowd, and may have fewer “mirror neurons” than normal people. So maybe autists just process the categories of “male” and “female” a little differently than everyone else, and in a small subset of autists, this results in trans identity.*

And another: maybe there are certain intersex disorders which result in differences in brain wiring/organization. (Yes, there are real intersex disorders, like Klinefelter’s, in which people have XXY chromosomes instead of XX or XY.) In a small set of cases, these unusually wired brains may be extremely good at certain tasks (like programming), resulting in people who are both “autism spectrum” and “trans.” This is actually the theory I’ve been running with for years, though it is not incompatible with the hormonal theories discussed above.

But we are talking small: trans people of any sort are extremely rare, probably on the order of <1/1000. Even if autists were trans at 8 times the rates of non-autists, that’s still only 8/1000 or 1/125. Autists themselves are pretty rare (estimates vary, but the vast majority of people are not autistic at all,) so we are talking about a very small subset of a very small population in the first place. We only notice these correlations at all because the total population has gotten so huge.
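Spelling out that arithmetic (the 1-in-1,000 and 8x figures come from the paragraph above; the autism prevalence is my own rough placeholder, since estimates vary widely):

```python
# Back-of-the-envelope rarity math (illustrative figures, not measured values).
trans_rate_general = 1 / 1000   # ~0.1% of the general population
autism_multiplier = 8           # suppose autists are trans at 8x the base rate
autism_rate = 0.015             # rough prevalence placeholder; estimates vary

trans_rate_in_autists = trans_rate_general * autism_multiplier
print(trans_rate_in_autists)            # 0.008, i.e. 1 in 125 autists

# Fraction of the *total* population that is both autistic and trans:
joint_fraction = autism_rate * trans_rate_in_autists
print(round(joint_fraction, 6))         # 0.00012, i.e. roughly 1 person in 8,000
```

Even under these generous assumptions, the overlap is on the order of one person in several thousand–a correlation only visible at all because modern populations are enormous.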

Sometimes, extremely rare things are random chance.

Is Crohn’s Disease Tuberculosis of the Intestines?

Source: Rise in Crohn’s Disease admission rates, Glasgow

Crohn’s is an inflammatory disease of the digestive tract involving diarrhea, vomiting, internal lesions, pain, and severe weight loss. Left untreated, Crohn’s can lead to death through direct starvation/malnutrition, infections caused by the intestinal walls breaking down and spilling feces into the rest of the body, or a whole host of other horrible symptoms, like pyoderma gangrenosum–basically your skin just rotting off.

Crohn’s disease has no known cause and no cure, though several treatments have proven effective at putting it into remission–at least temporarily.

The disease appears to be triggered by a combination of environmental, bacterial, and genetic factors–about 70 genes have been identified so far that appear to contribute to an individual’s chance of developing Crohn’s, but no gene has been found yet that definitely triggers it. (The siblings of people who have Crohn’s are more likely than non-siblings to also have it, and identical twins of Crohn’s patients have a 55% chance of developing it.) A variety of environmental factors, such as living in a first world country (parasites may be somewhat protective against the disease), smoking, or eating lots of animal protein, also correlate with Crohn’s, but since only 3.2/1000 people even in the West have it, these obviously don’t trigger the disease in most people.

Crohn’s appears to be a kind of over-reaction of the immune system, though not specifically an auto-immune disorder, which suggests that a pathogen of some sort is probably involved. Most people are probably able to fight off this pathogen, but people with a variety of genetic issues may have more trouble–according to Wikipedia, “There is considerable overlap between susceptibility loci for IBD and mycobacterial infections.[62]” Mycobacteria are a genus of bacteria that includes the species that cause tuberculosis and leprosy. A variety of bacteria–including specific strains of E. coli, Yersinia, Listeria, and Mycobacterium avium subspecies paratuberculosis–are found in the intestines of Crohn’s sufferers at higher rates than in the intestines of non-sufferers (intestines, of course, are full of all kinds of bacteria.)

Source: The Gutsy Group

Crohn’s treatment depends on the severity of the case and specific symptoms, but often includes a course of antibiotics (especially if the patient has abscesses), tube feeding (in acute cases where the sufferer is having trouble digesting food), and long-term immune-system suppressants such as prednisone, methotrexate, or infliximab. In severe cases, damaged portions of the intestines may be cut out. Before the development of immunosuppressant treatments, sufferers often progressively lost more and more of their intestines, with predictably unpleasant results, like no longer having a functioning colon. (70% of Crohn’s sufferers eventually have surgery.)

A similar disease, Johne’s, infects cattle. Johne’s is caused by Mycobacterium avium subspecies paratuberculosis, (hereafter just MAP). MAP typically infects calves at birth, transmitted via infected feces from their mothers, incubates for two years, and then manifests as diarrhea, malnutrition, dehydration, wasting, starvation, and death. Luckily for cows, there’s a vaccine, though any infectious disease in a herd is a problem for farmers.

If you’re thinking that “paratuberculosis” sounds like “tuberculosis,” you’re correct. When scientists first isolated it, they thought the bacteria looked rather like tuberculosis, hence the name, “tuberculosis-like.” The scientists’ instincts were correct, and it turns out that MAP is in the same bacterial genus as tuberculosis and leprosy (though it may be more closely related to leprosy than TB.) (“Genus” is one step up from “species;” our species is Homo sapiens; our genus, Homo, we share with Homo neanderthalensis, Homo erectus, etc., but chimps and gorillas are not in the Homo genus.)

Figure A: Crohn’s Disease in Humans. Figure B: Johne’s Disease in Animals. Greenstein, Lancet Infectious Diseases, 2004. H/T Human Para Foundation

The intestines of cattle who have died of MAP look remarkably like the intestines of people suffering from advanced Crohn’s disease.

MAP can actually infect all sorts of mammals, not just cows; it’s just more common and problematic in cattle herds. (Sorry, we’re not getting through this post without photos of infected intestines.)

So here’s how it could work:

The MAP bacterium–possibly transmitted via milk or meat products–is fairly common and infects a variety of mammals. Most people who encounter it fight it off with no difficulty (or perhaps have a short bout of diarrhea and then recover.)

A few people, though, have genetic issues that make it harder for them to fight off the infection. For example, Crohn’s sufferers produce less intestinal mucus, which normally acts as a barrier between the intestines and all of the stuff in them.

Interestingly, parasite infections can increase intestinal mucus (some parasites feed on mucus), which in turn is protective against other forms of infection; decreasing parasite load can increase the chance of other intestinal infections.

Once MAP enters the intestinal walls, the immune system attempts to fight it off, but a genetic defect in autophagy (the process by which cells digest internal pathogens) results in the immune cells themselves getting infected. The body responds to the signs of infection by sending more immune cells to fight it, which subsequently also get infected with MAP, triggering the body to send even more immune cells. These lumps of infected cells become the characteristic ulcerations and lesions that mark Crohn’s disease and eventually leave the intestines riddled with inflamed tissue and holes.

The most effective treatments for Crohn’s, like infliximab, don’t target the infection itself but the immune system. They work by interrupting the immune system’s feedback cycle so that it stops sending more cells to the infected area, giving the already-infected cells a chance to die. This doesn’t cure the disease, but it does give the intestines time to recover.

Unfortunately, this means infliximab raises your chance of developing TB:

There were 70 reported cases of tuberculosis after treatment with infliximab for a median of 12 weeks. In 48 patients, tuberculosis developed after three or fewer infusions. … Of the 70 reports, 64 were from countries with a low incidence of tuberculosis. The reported frequency of tuberculosis in association with infliximab therapy was much higher than the reported frequency of other opportunistic infections associated with this drug. In addition, the rate of reported cases of tuberculosis among patients treated with infliximab was higher than the available background rates.

because it is actively suppressing the immune system’s ability to fight diseases in the TB family.

Luckily, if you live in the first world and aren’t in prison, you’re unlikely to catch TB–only about 5-10% of the US population tests positive for TB, compared to 80% in many African and Asian countries. (In other words, increased immigration from these countries will absolutely put Crohn’s sufferers at risk of dying.)

There are a fair number of similarities between Crohn’s, TB, and leprosy: they are all very slow diseases that can take years to finally kill you. By contrast, other deadly diseases, like smallpox, cholera, and yersinia pestis (plague), spread and kill extremely quickly. Within about two weeks, you’ll definitely know if your plague infection is going to kill you or not, whereas you can have leprosy for 20 years before you even notice it.

TB, like Crohn’s, creates granulomas:

Tuberculosis is classified as one of the granulomatous inflammatory diseases. Macrophages, T lymphocytes, B lymphocytes, and fibroblasts aggregate to form granulomas, with lymphocytes surrounding the infected macrophages. When other macrophages attack the infected macrophage, they fuse together to form a giant multinucleated cell in the alveolar lumen. The granuloma may prevent dissemination of the mycobacteria and provide a local environment for interaction of cells of the immune system.[63] However, more recent evidence suggests that the bacteria use the granulomas to avoid destruction by the host’s immune system. … In many people, the infection waxes and wanes.

Crohn’s also waxes and wanes. Many sufferers experience flare-ups of the disease, during which they may have to be hospitalized, tube fed, and put through another round of antibiotics or resection (surgical removal of part of the intestines) before they improve–until the disease flares up again.

Leprosy is also marked by lesions, though of course so are dozens of other diseases.

Note: Since Crohn’s is a complex, multi-factorial disease, there may be more than one bacteria or pathogen that could infect people and create similar results. Alternatively, Crohn’s sufferers may simply have intestines that are really bad at fighting off all sorts of diseases, as a side effect of Crohn’s, not a cause, resulting in a variety of unpleasant infections.

The MAP hypothesis suggests several possible treatment routes:

  1. Improving the intestinal mucus, perhaps via parasites or medicines derived from parasites
  2. Improving the intestinal microbe balance
  3. Developing antibiotics that treat MAP
  4. Developing an anti-MAP vaccine similar to the one for Johne’s disease in cattle
  5. Eliminating MAP from the food supply

Here’s an article about the parasites and Crohn’s:

To determine how the worms could be our frenemies, Cadwell and colleagues tested mice with the same genetic defect found in many people with Crohn’s disease. Mucus-secreting cells in the intestines malfunction in the animals, reducing the amount of mucus that protects the gut lining from harmful bacteria. Researchers have also detected a change in the rodents’ microbiome, the natural microbial community in their guts. The abundance of one microbe, an inflammation-inducing bacterium in the Bacteroides group, soars in the mice with the genetic defect.

The researchers found that feeding the rodents one type of intestinal worm restored their mucus-producing cells to normal. At the same time, levels of two inflammation indicators declined in the animals’ intestines. In addition, the bacterial lineup in the rodents’ guts shifted, the team reports online today in Science. Bacteroides’s numbers plunged, whereas the prevalence of species in a different microbial group, the Clostridiales, increased. A second species of worm also triggers similar changes in the mice’s intestines, the team confirmed.

To check whether helminths cause the same effects in people, the scientists compared two populations in Malaysia: urbanites living in Kuala Lumpur, who harbor few intestinal parasites, and members of an indigenous group, the Orang Asli, who live in a rural area where the worms are rife. A type of Bacteroides, the proinflammatory microbes, predominated in the residents of Kuala Lumpur. It was rarer among the Orang Asli, where a member of the Clostridiales group was plentiful. Treating the Orang Asli with drugs to kill their intestinal worms reversed this pattern, favoring Bacteroides species over Clostridiales species, the team documented.

This sounds unethical, unless they were merely tagging along with another team of doctors who were de-worming the Orang Asli for normal health reasons and didn’t intend to risk inflicting Crohn’s on anyone. Nevertheless, it’s an interesting study.

At any rate, so far they haven’t managed to produce an effective medicine from parasites, possibly in part because people think parasites are icky.

But if parasites aren’t disgusting enough for you, there’s always the option of directly changing the gut bacteria: fecal microbiota transplants (FMT). A fecal transplant is exactly what it sounds like: you take the regular feces out of the patient and put in new, fresh feces from an uninfected donor. (When your other option is pooping into a bag for the rest of your life because your colon was removed, swallowing a few poop pills doesn’t sound so bad.) E.g., Fecal microbiota transplant for refractory Crohn’s:

Approximately one-third of patients with Crohn’s disease do not respond to conventional treatments, and some experience significant adverse effects, such as serious infections and lymphoma, and many patients require surgery due to complications. … Herein, we present a patient with Crohn’s colitis in whom biologic therapy failed previously, but clinical remission and endoscopic improvement was achieved after a single fecal microbiota transplantation infusion.

Here’s a Chinese doctor who appears to have good success with FMTs to treat Crohn’s–improvement in 87% of patients one month after treatment and remission in 77%, though the effects may wear off over time. Note: even infliximab, considered a “wonder drug” for its amazing abilities, only works for about 50-75% of patients, must be administered via regular IV infusions for life (or until it stops working,) costs about $20,000 a year per patient, and has some serious side effects, like cancer. If fecal transplants can get the same results, that’s pretty good.

Little known fact: “In the United States, the Food and Drug Administration (FDA) has regulated human feces as an experimental drug since 2013.”

Antibiotics are another potential route. Redhill Biopharma is conducting a phase III clinical study of antibiotics designed to fight MAP in Crohn’s patients. Redhill is expected to release some of its results in April.

A Crohn’s MAP vaccine trial is underway in healthy volunteers:

Mechanism of action: The vaccine is what is called a ‘T-cell’ vaccine. T-cells are a type of white blood cell -an important player in the immune system- in particular, for fighting against organisms that hide INSIDE the body’s cells –like MAP does. Many people are exposed to MAP but most don’t get Crohn’s –Why? Because their T-cells can ‘see’ and destroy MAP. In those who do get Crohn’s, the immune system has a ‘blind spot’ –their T-cells cannot see MAP. The vaccine works by UN-BLINDING the immune system to MAP, reversing the immune dysregulation and programming the body’s own T-cells to seek out and destroy cells containing MAP. For general information, there are two informative videos about T Cells and the immune system below.

Efficacy: In extensive tests in animals (in mice and in cattle), 2 shots of the vaccine spaced 8 weeks apart proved to be a powerful, long-lasting stimulant of immunity against MAP. To read the published data from the trial in mice, click here. To read the published data from the trial in cattle, click here.

Before: Fistula in the intestines, 31 year old Crohn’s patient–Dr Borody, Combining infliximab, anti-MAP and hyperbaric oxygen therapy for resistant fistulizing Crohn’s disease

Dr. Borody (who was influential in the discovery that ulcers are caused by the H. pylori bacterium and not stress) has had amazing success treating Crohn’s patients with a combination of infliximab, anti-MAP antibiotics, and hyperbaric oxygen. Here are two of his before and after photos of the intestines of a 31 yr old Crohn’s sufferer:

Here are some more interesting articles on the subject:

Sources: Is Crohn’s Disease caused by a Mycobacterium? Comparisons with Tuberculosis, Leprosy, and Johne’s Disease.

What is MAP?

Researcher Finds Possible link Between Cattle and Human Diseases:

Last week, Davis and colleagues in the U.S. and India published a case report in Frontiers of Medicine (http://journal.frontiersin.org/article/10.3389/fmed.2016.00049/full). The report described a single patient, clearly infected with MAP, with the classic features of Johne’s disease in cattle, including the massive shedding of MAP in his feces. The patient was also ill with clinical features that were indistinguishable from the clinical features of Crohn’s. In this case though, a novel treatment approach cleared the patient’s infection.

The patient was treated with antibiotics known to be effective for tuberculosis, which then eliminated the clinical symptoms of Crohn’s disease, too.

After: The same intestines, now healed

Psychology Today: Treating Crohn’s Disease:

Through luck, hard work, good fortune, perseverance, and wonderful doctors, I seem to be one of the few people in the world who can claim to be “cured” of Crohn’s Disease. … In brief, I was treated for 6 years with medications normally used for multidrug resistant TB and leprosy, under the theory that a particular germ causes Crohn’s Disease. I got well, and have been entirely well since 2004. I do not follow a particular diet, and my recent colonoscopies and blood work have shown that I have no inflammation. The rest of these 3 blogs will explain more of the story.

What about removing Johne’s disease from the food supply? Assuming Johne’s is the culprit, this may be hard to do, (it’s pretty contagious in cattle, can lie dormant for years, and survives cooking) but drinking ultrapasteurized milk may be protective, especially for people who are susceptible to the disease.

***

However… there are also studies that contradict the MAP theory. For example, a recent study of the rate of Crohn’s disease in people exposed to Johne’s disease found no correlation. (However, Crohn’s is a pretty rare condition, and the survey only found 7 total cases, which is small enough that random chance could be a factor–but we are talking about people who probably got very up close and personal with feces infected with MAP.)

Another study found a negative correlation between Crohn’s and milk consumption:

Logistic regression showed no significant association with measures of potential contamination of water sources with MAP, water intake, or water treatment. Multivariate analysis showed that consumption of pasteurized milk (per kg/month: odds ratio (OR) = 0.82, 95% confidence interval (CI): 0.69, 0.97) was associated with a reduced risk of Crohn’s disease. Meat intake (per kg/month: OR = 1.40, 95% CI: 1.17, 1.67) was associated with a significantly increased risk of Crohn’s disease, whereas fruit consumption (per kg/month: OR = 0.78, 95% CI: 0.67, 0.92) was associated with reduced risk.

So even if Crohn’s is caused by MAP or something similar, it appears that people aren’t catching it from milk.

There are other theories about what causes Crohn’s–these folks, for example, think it’s related to consumption of GMO corn. Perhaps MAP has only been found in the intestines of Crohn’s patients because people with Crohn’s are really bad at fighting off infections. Perhaps the whole thing is caused by weird gut bacteria, or not enough parasites, insufficient Vitamin D, or industrial pollution.

The condition remains very much a mystery.

Weight, Taste, and Politics: A Theory of Republican Over-Indulgence

So I was thinking about taste (flavor) and disgust (emotion.)

As I mentioned about a month ago, 25% of people are “supertasters,” that is, better at tasting than the other 75% of people. Supertasters experience flavors more intensely than ordinary tasters, resulting in a preference for “bland” food (food with too much flavor is “overwhelming” to them.) They also have a more difficult time getting used to new foods.

One of my work acquaintances of many years –we’ll call her Echo–is obese, constantly on a diet, and constantly eats sweets. She knows she should eat vegetables and tries to do so, but finds them bitter and unpleasant, and so the general outcome is as you expect: she doesn’t eat them.

Since I find most vegetables quite tasty, I find this attitude very strange–but I am willing to admit that I may be the one with unusual attitudes toward food.

Echo is also quite conservative.

This got me thinking about vegetarians vs. people who think vegetarians are crazy. Why (aside from novelty of the idea) should vegetarians be liberals? Why aren’t vegetarians just people who happen to really like vegetables?

What if there were something in preference for vegetables themselves that correlated with political ideology?

Certainly we can theorize that “supertaster” => “vegetables taste bitter” => “dislike of vegetables” => “thinks vegetarians are crazy.” Some supertasters might think meat tastes bad, but anecdotal evidence doesn’t support this; see also Wikipedia, where supertasting is clearly associated with responses to plants:

Any evolutionary advantage to supertasting is unclear. In some environments, heightened taste response, particularly to bitterness, would represent an important advantage in avoiding potentially toxic plant alkaloids. In other environments, increased response to bitterness may have limited the range of palatable foods. …

Although individual food preference for supertasters cannot be typified, documented examples for either lessened preference or consumption include:

Mushrooms? Echo was just complaining about mushrooms.

Let’s talk about disgust. Disgust is an important reaction to things that might infect or poison you, triggering reactions from scrunching up your face to vomiting (i.e., expelling the poison.) We process disgust in our amygdalas, and some people appear to have bigger or smaller amygdalas than others, with the result that the folks with bigger amygdalas feel more disgust.

Humans also route a variety of social situations through their amygdalas, resulting in the feeling of “disgust” in response to things that are not rotten food, like other people’s sexual behaviors, criminals, or particularly unattractive people. People with larger amygdalas also tend to find more human behaviors disgusting, and this disgust correlates with social conservatism.

To what extent are “taste” and “disgust” independent of each other? I don’t know; perhaps they are intimately linked into a single feedback system, where disgust and taste sensitivity cause each other, or perhaps they are relatively independent, so that a few unlucky people are both super-sensitive to taste and easily disgusted.

People who find other people’s behavior disgusting and off-putting may also be people who find flavors overwhelming, prefer bland or sweet foods over bitter ones, think vegetables are icky, vegetarians are crazy, and struggle to stay on diets.

What’s that, you say, I’ve just constructed a just-so story?

Well, this is the part where I go looking for evidence. It turns out that obesity and political orientation do correlate:

Michael Shin and William McCarthy, researchers from UCLA, have found an association between counties with higher levels of support for the 2012 Republican presidential candidate and higher levels of obesity in those counties.

Shin and McCarthy’s map of obesity vs. political orientation

Looks like the Mormons and Southern blacks are outliers.

(I don’t really like maps like this for displaying data; I would much prefer a simple graph showing orientation on one axis and obesity on the other, with each county as a datapoint.)

(Unsurprisingly, the first 49 hits I got when searching for correlations between political orientation and obesity were almost all about what other people think of fat people, not what fat people think. This is probably because researchers tend to be skinny people who want to fight “fat phobia” but aren’t actually interested in the opinions of fat people.)

The 15 most caffeinated cities, from I love Coffee–note that Phoenix is #7, not #1.

Disgust also correlates with political belief, but we already knew that.

A not entirely scientific survey also indicates that liberals seem to like vegetables better than conservatives:

  • Liberals are 28 percent more likely than conservatives to eat fresh fruit daily, and 17 percent more likely to eat toast or a bagel in the morning, while conservatives are 20 percent more likely to skip breakfast.
  • Ten percent of liberals surveyed indicated they are vegetarians, compared with 3 percent of conservatives.
  • Liberals are 28 percent more likely than conservatives to enjoy beer, with 60 percent of liberals indicating they like beer.

(See above where Wikipedia noted that supertasters dislike beer.) I will also note that coffee, which supertasters tend to dislike because it is too bitter, is very popular in the ultra-liberal cities of Portland and Seattle, whereas heavily sweetened iced tea is practically the official beverage of the South.

The only remaining question is if supertasters are conservative. That may take some research.

Update: I have not found, to my disappointment, a simple study that just looks at correlation between ideology and supertasting (or nontasting.) However, I have found a couple of useful items.

In Verbal priming and taste sensitivity make moral transgressions gross, Herz writes:

Standard tests of disgust sensitivity, a questionnaire developed for this research assessing different types of moral transgressions (nonvisceral, implied-visceral, visceral) with the terms “angry” and “grossed-out,” and a taste sensitivity test of 6-n-propylthiouracil (PROP) were administered to 102 participants. [PROP is commonly used to test for “supertasters.”] Results confirmed past findings that the more sensitive to PROP a participant was the more disgusted they were by visceral, but not moral, disgust elicitors. Importantly, the findings newly revealed that taste sensitivity had no bearing on evaluations of moral transgressions, regardless of their visceral nature, when “angry” was the emotion primed. However, when “grossed-out” was primed for evaluating moral violations, the more intense PROP tasted to a participant the more “grossed-out” they were by all transgressions. Women were generally more disgust sensitive and morally condemning than men, … The present findings support the proposition that moral and visceral disgust do not share a common oral origin, but show that linguistic priming can transform a moral transgression into a viscerally repulsive event and that susceptibility to this priming varies as a function of an individual’s sensitivity to the origins of visceral disgust—bitter taste. [bold mine.]

In other words, supertasters are more easily disgusted, and with verbal priming will transfer that disgust to moral transgressions. (And easily disgusted people tend to be conservatives.)

The Effect of Calorie Information on Consumers’ Food Choice: Sources of Observed Gender Heterogeneity, by Heiman and Lowengart, states:

While previous studies found that inherited taste-blindness to bitter compounds such as PROP may be a risk factor for obesity, this literature has been hotly disputed (Keller et al. 2010).

(Always remember, of course, that a great many social-science studies ultimately do not replicate.)

I’ll let you know if I find anything else.

Southpaw Genetics

Warning: Totally speculative

This is an attempt at a coherent explanation for why left-handedness (and right-handedness) exist in the distributions that they do.

Handedness is a rather exceptional human trait. Most animals don’t have a dominant hand (or foot.) Horses have no dominant hooves; anteaters dig equally well with both paws; dolphins don’t favor one flipper over the other; monkeys don’t fall out of trees if they try to grab a branch with their left hands. Only humans have a really distinct tendency to use one side of their bodies over the other.

And about 90% of us use our right hands and about 10% our left (Wikipedia claims 10%, but The Lopsided Ape reports 12%), an observation that appears to hold pretty consistently throughout both time and culture, so long as we aren’t dealing with a culture where lefties are forced to write with their right hands.

A simple Mendel-square, two-allele explanation for handedness–a dominant allele for right-handedness and a recessive one for left-handedness, with equal proportions of the alleles in society–would result in 75% righties and 25% lefties. Even if the proportions weren’t equal, the offspring of two lefties ought to be 100% left-handed. This is not, however, what we see: the children of two lefties have only a 25% chance or so of being left-handed themselves.
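That prediction is easy to check by brute force. Here’s a purely illustrative sketch of the simple one-gene model, with a dominant right-handed allele “R” and a recessive left-handed allele “l” at equal frequencies:

```python
from itertools import product

# One gene, two alleles: "R" (right-handed, dominant) and "l" (left-handed,
# recessive), each assumed present at 50% frequency in the population.
def is_lefty(genotype):
    # Left-handed only with two recessive copies ("ll").
    return genotype == ("l", "l")

# With equal allele frequencies, the four allele combinations are equally likely.
genotypes = list(product("Rl", repeat=2))
lefty_rate = sum(is_lefty(g) for g in genotypes) / len(genotypes)
print(lefty_rate)  # 0.25 -- the predicted 25% lefties

# Two left-handed (ll) parents can only pass on "l", so every child is ll:
children = [("l", "l")]  # the only possible offspring genotype
print(all(is_lefty(c) for c in children))  # True -- 100% left-handed children
```

Both outputs are exactly what the simple model predicts, and exactly what we don’t observe: real lefty rates are around 10-12%, and children of two lefties are left-handed only about a quarter of the time.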

So let’s try a more complicated model.

Let’s assume that there are two alleles that code for right-handedness (hereafter “R”). You get one from your mom and one from your dad.

Each of these alleles is accompanied by a second allele that codes for either nothing (hereafter “O”) or potentially switches the expression of your handedness (hereafter “S”).

Everybody in the world gets two identical R alleles, one from mom and one from dad.

Everyone also gets two S or O alleles, one from mom and one from dad. One of these S or O alleles affects one of your Rs, and the other affects the other R.

Your potential pairs, then, are:

RO/RO, RO/RS, RS/RO, or RS/RS

RO = right-handed allele.

RS = 50% chance of expressing right or left dominance; RS/RS thus => a 25% chance of both alleles coming out lefty.

So RO/RO, RO/RS, and RS/RO = righties (but the RO/ROs may have especially dominant right hands, and half of the RO/RS guys may have weakly dominant right hands.)

Only RS/RS produces lefties, and of those, only 25% defeat the dominance odds.

This gets us our observed correlation of only 25% of children of left-handed couples being left-handed themselves.

(Please note that this is still a very simplified model; Wikipedia claims that there may be more than 40 alleles involved.)

What of the general population as a whole?

Assuming random mating in a population with equal quantities of RO/RO, RO/RS, RS/RO and RS/RS, we’d end up with 25% of children RS/RS. But if only 25% of RS/RS turn out lefties, only 6.25% of children would be lefties. We’re still missing 4-6% of the population.
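The same brute-force check works for this model. A short sketch of the paragraph above’s arithmetic (assuming, as stated, equal genotype frequencies and lefty odds only for RS/RS):

```python
# Chance of being left-handed for each genotype pair, per the model:
# only RS/RS can come out lefty, and then only 25% of the time.
p_left = {"RO/RO": 0.0, "RO/RS": 0.0, "RS/RO": 0.0, "RS/RS": 0.25}

# Assume random mating has left the four genotype pairs equally common.
genotype_freq = 1 / len(p_left)  # 25% each
population_lefty_rate = sum(genotype_freq * p for p in p_left.values())
print(population_lefty_rate)  # 0.0625 -- only 6.25% lefties, short of the observed ~10-12%
```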

This implies that either: A. Wikipedia has the wrong #s for % of children of lefties who are left-handed; B. about half of lefties are RO/RS (about 1/8th of the RO/RS population); C. RS is found in twice the proportion as RO in the population; or D. my model is wrong.

According to Anything Left-Handed:

Dr Chris McManus reported in his book Right Hand, Left Hand on a study he had done based on a review of scientific literature which showed parent handedness for 70,000 children. On average, the chances of two right-handed parents having a left-handed child were around 9% left-handed children, two left-handed parents around 26% and one left and one right-handed parent around 19%. …
More than 50% of left-handers do not know of any other left-hander anywhere in their living family.

This implies B, that about half of lefties are RO/RS. Having one RS combination gives you a 12.5% chance of being left-handed; having two RS combinations gives you a 25% chance.

And that… I think that works. And it means we can refine our theory–we don’t need two R alleles; we only need one. (Obviously it is more likely a whole bunch of alleles that code for a whole system, but since they act together, we can model them as one.) The R allele is then modified by a pair of alleles that comes in either O (do nothing,) or S (switch.)

One S allele gives you a 12.5% chance of being a lefty; two doubles your chances to 25%.
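And a sketch of the refined model’s population arithmetic (assuming the S allele at 50% frequency, which gives the OO/OS/SS genotypes at the standard Hardy-Weinberg proportions of 25%/50%/25%):

```python
# Chance of left-handedness by number of S ("switch") alleles, per the model:
p_left = {0: 0.0, 1: 0.125, 2: 0.25}

# S at 50% allele frequency gives Hardy-Weinberg genotype frequencies
# for zero, one, or two copies of S (i.e. OO, OS, SS):
freq = {0: 0.25, 1: 0.50, 2: 0.25}

population_rate = sum(freq[n] * p_left[n] for n in freq)
print(population_rate)  # 0.125 -- 12.5% lefties overall

# Even though lefties are rare, most people carry the S allele:
print(freq[1] + freq[2])  # 0.75 -- 75% of the population has at least one S

# Two SS parents can only produce SS children, who come out lefty 25% of
# the time -- in line with McManus's ~26% for two left-handed parents.
print(p_left[2])  # 0.25
```

The 12.5% population figure also squares with The Lopsided Ape’s 12% estimate quoted earlier.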

Interestingly, this model suggests not only that no gene for “left-handedness” exists, but that “left-handedness” might not even be the allele’s goal. Despite the rarity of lefties, the S allele is found in 75% of the population (the same % as the O allele.) My suspicion is that the S allele is doing something else valuable, like making sure we don’t become too lopsided in our abilities or try to shunt all of our mental functions to one side of our brain.

The hominin braid

Much has been said ’round the HBD-osphere, lately, on the age of the Pygmy (and Bushmen?)/everyone else split. Greg Cochran of West Hunter, for example, supports a split around 300,000 years ago–100,000 years before the supposed emergence of “anatomically modern humans” aka AMH aka Homo sapiens sapiens:

A number of varieties of Homo are grouped into the broad category of archaic humans in the period beginning 500,000 years ago (or 500ka). It typically includes Homo neanderthalensis (40ka-300ka), Homo rhodesiensis (125ka-300ka), Homo heidelbergensis (200ka-600ka), and may also include Homo antecessor (800ka-1200ka).[1] This category is contrasted with anatomically modern humans, which include Homo sapiens sapiens and Homo sapiens idaltu. (source)

According to genetic and fossil evidence, archaic Homo sapiens evolved to anatomically modern humans solely in Africa, between 200,000 and 100,000 years ago, with members of one branch leaving Africa by 60,000 years ago and over time replacing earlier human populations such as Neanderthals and Homo erectus. (source)

The last steps taken by the anatomically modern humans before becoming the current Homo sapiens, known as “behaviourally modern humans”, were taken either abruptly circa 40-50,000 years ago,[11] or gradually, and led to the achievement of a suite of behavioral and cognitive traits that distinguishes us from merely anatomically modern humans, hominins, and other primates. (source)

Cochran argues:

They’ve managed to sequence a bit of autosomal DNA from the Atapuerca skeletons, about 430,000 years old, confirming that they are on the Neanderthal branch.

Among other things, this supports the slow mutation rate, one compatible with what we see in modern family trios, but also with the fossil record.

This means that the Pygmies, and probably the Bushmen also, split off from the rest of the human race about 300,000 years ago. Call them Paleoafricans.

Personally, I don’t think the Pygmies are that old. Why? Call it intuition; it just seems more likely that they aren’t. Of course, there are a lot of guys out there whose intuition told them those rocks couldn’t possibly be more than 6,000 years old; I recognize that intuition isn’t always a great guide. It’s just the one I’ve got.

(Actually, my intuition is based partially on my potentially flawed understanding of Haak’s graph, which I read as indicating that the Pygmies split off quite recently.)

The thing about speciation (especially of extinct species we know only from their bones) is that it is not really as exact as we’d like it to be. A lot of people think the standard is “can these animals interbreed?” but dogs, coyotes, and wolves can all interbreed. Humans and Neanderthals interbred; the African forest elephant and African bush elephant were long thought to be the same species because they interbreed in zoos, but have been re-categorized into separate species because in the wild, their ranges don’t overlap and so they wouldn’t interbreed without humans moving them around. And now they’re telling us that the Brontosaurus is a valid genus after all, but Pluto still isn’t a planet.

This is a tree

The distinction between archaic Homo sapiens and Homo sapiens sapiens is based partly on morphology (look at those brow ridges!) and partly on the urge to draw a line somewhere. If HSS could interbreed with Neanderthals, from whom they were separated by a good 500,000 years, there’s no doubt we moderns could interbreed with AHS from 200,000 years ago. (There’d be a fertility hit, just as pairings between disparate groups of modern HSS take fertility hits, but probably nothing too major–probably not as bad as an Rh- woman x Rh+ man, which we consider normal.)

bones sorted by time

So I don’t think Cochran is being unreasonable. It’s just not what my gut instinct tells me. I’ll be happy to admit I was wrong if I am.

The dominant model of human (and other) evolution has long been the tree (just as we model our own families.) Trees are easy to draw and easy to understand. The only drawback is that it’s not always clear exactly where a particular skull should be placed on our trees, or whether the skull we have is even representative of its species–the first Neanderthal bones we uncovered actually hailed from an individual who had suffered from arthritis, resulting in decades of misunderstanding of Neanderthal morphology. (Consider, for sympathy, the difficulties of an alien anthropologist handed a modern Pygmy skeleton, 4’11”, and a Dinka skeleton, 5’11”, and asked to sort them by species.)

blob chart

What we really have are a bunch of bones, and we try to sort them out by time and place, and see if we can figure out which ones belong to separate species. We do our best given what we have, but it’d be easier if we had a few thousand more ancient hominin bones.

The fact that different “species” can interbreed complicates the tree model, because branches do not normally split off and then fuse with other branches, at least not on real trees. These days, it’s looking more like a lattice model–but this probably overstates the amount of crossing. Aboriginal Australians, for example, were almost completely isolated for about 40,000 years, with (IIRC) only one known instance of genetic introgression that happened about 11,000 years ago when some folks from India washed up on the northern shore. The Native Americans haven’t been as isolated, because there appear to have been multiple waves of people that crossed the Bering Strait or otherwise made it into the Americas, but we are still probably talking about only a handful of groups over the course of 40,000 years.

Trellis model

Still, the mixing is there; as our ability to suss out genetic differences becomes better, we’re likely to keep turning up new instances.

So what happens when we get deep into the 200,000-year origins of humanity? I suspect–though I could be completely wrong!–that things near the origins get murkier, not clearer. The tree model suggests that the original group of hominins at the base of the “human” tree would be less genetically diverse than the scattered spectrum of humanity we have today, but these folks may have had a great deal of genetic diversity among themselves due to having recently mated with other human species (many of which we haven’t even found, yet.) And those species themselves had crossed with other species. For example, we know that Melanesians have a decent chunk of Denisovan DNA (and almost no one outside of Melanesia has this, with a few exceptions,) and the Denisovans show evidence of even older DNA introgressed from a previous hominin species they had mated with. So you can imagine the many layers of introgression you could get in a part-Melanesian person with some Denisovan DNA, which in turn carries some of this other DNA… As we look back in time toward our own origins, we may similarly see a great variety of very disparate DNA that has, in essence, hitch-hiked down the years from older species, but has nothing to do with the timing of the split of modern groups.

As always, I am speculating.

Do small families lead to higher IQ?

Okay, so this is just me thinking (and mathing) out loud. Suppose we have two different groups (A and B) of 100 people each (an arbitrary number chosen for ease of dividing.) In Group A, people are lumped into 5 large “clans” of 20 people each. In Group B, people are lumped into 20 small clans of 5 people each.

Each society has an average IQ of 100–ten people with 80IQs, ten people with 120IQs, and eighty people with 100IQs. I assume that there is slight but not absolute assortative mating, so that most high-IQ and low-IQ people end up marrying someone average.

IQ pairings:

100/100: 30 couples
100/80: 9 couples
100/120: 9 couples
80/80: 1 couple
120/120: 1 couple

Okay, so there should be thirty couples where both partners have 100IQs, nine 100/80IQ couples, nine 100/120IQ couples, one 80/80IQ couple, and one 120/120IQ couple.

If each couple has 2 kids, distributed thusly:

100/100=> 10% 80, 10% 120, and 80% 100

120/120=> 100% 120

80/80 => 100% 80

120/100=> 100% 110

80/100 => 100% 90

Then we’ll end up with eight 80IQ kids, eighteen 90IQ, forty-eight 100IQ, eighteen 110IQ, and eight 120IQ.

So, under pretty much perfect and totally arbitrary conditions that probably only vaguely approximate how genetics actually works (also, we are ignoring the influence of random chance on the grounds that it is random and therefore evens out over the long-term,) our population approaches a normal bell-curved IQ distribution.

Third gen:

80/80: 1 couple => 2 kids @ 80
80/90: 2 couples => 4 kids @ 85
80/100: 5 couples => 10 kids @ 90
90/90: 4 couples => 8 kids @ 90
90/100: 9 couples => 18 kids @ 95
90/110: 2 couples => 4 kids @ 100
100/100: 6 couples => 12 kids: 1 @ 80, 10 @ 100, 1 @ 120
100/110: 9 couples => 18 kids @ 105
100/120: 5 couples => 10 kids @ 110
110/110: 4 couples => 8 kids @ 110
110/120: 2 couples => 4 kids @ 115
120/120: 1 couple => 2 kids @ 120

That comes to 3 80, 4 85, 18 90, 18 95, 14 100, 18 105, 18 110, 4 115, and 3 120. For simplicity’s sake (lumping the 85s with the 80s, the 95s and 105s with the 100s, and the 115s with the 120s):

7 80IQ, 18 90IQ, 50 100IQ, 18 110IQ, and 7 120IQ.

Not bad for a very, very rough model that is trying to keep the math simple enough to work out in a blog post window instead of on paper, and all 100 children are accounted for.

Anyway, now let’s assume that we don’t have a 2-child policy in place, but that being smart (or dumb) does something to your reproductive chances.

In the simplest model, people with 80IQs have zero children, 90s have one child, 100s have 2 children, 110s have 3 children, and 120s have 4 children.

Oh god, but the couples are crossed–so do I take the average or the top IQ? I guess I’ll take the average.

Gen 2:

100/100: 30 couples, avg 100 => 60 kids (6 @ 80, 48 @ 100, 6 @ 120)
100/80: 9 couples, avg 90 => 9 kids @ 90
100/120: 9 couples, avg 110 => 27 kids @ 110
80/80: 1 couple, avg 80 => 0 kids
120/120: 1 couple, avg 120 => 4 kids @ 120

So our new distribution is six 80IQ, nine 90IQ, forty-eight 100IQ, twenty-seven 110IQ, and ten 120IQ.

(checks math oh good it adds up to 100.)

We’re not going to run gen three, as obviously the trend will continue.
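Since we’re mathing out loud anyway, the Gen 2 calculation above is small enough to sketch in Python. The pairing counts, fertility schedule, and child-IQ rules are just the ones from the text; the function and variable names are my own, and this is toy bookkeeping, not real genetics:

```python
# Fertility schedule from the text: kids per couple, keyed by the couple's average IQ.
FERTILITY = {80: 0, 90: 1, 100: 2, 110: 3, 120: 4}

COUPLES = {  # (parent IQ, parent IQ): number of couples
    (100, 100): 30, (100, 80): 9, (100, 120): 9, (80, 80): 1, (120, 120): 1,
}

def children(p1, p2, n_kids):
    """Child IQ counts for one pairing type, per the rules in the text."""
    if p1 == p2 == 100:  # 100/100 => 10% at 80, 80% at 100, 10% at 120
        tail = n_kids // 10
        return {80: tail, 100: n_kids - 2 * tail, 120: tail}
    return {(p1 + p2) // 2: n_kids}  # otherwise, kids get the midpoint IQ

def next_generation(couples):
    """Tally the next generation's IQ distribution from the couple counts."""
    dist = {}
    for (p1, p2), n_couples in couples.items():
        n_kids = n_couples * FERTILITY[(p1 + p2) // 2]
        for iq, count in children(p1, p2, n_kids).items():
            dist[iq] = dist.get(iq, 0) + count
    return dist

# Six 80s, nine 90s, forty-eight 100s, twenty-seven 110s, ten 120s.
print(next_generation(COUPLES))
```

Running gen three would just mean re-pairing this output and feeding it back through `next_generation`; as noted above, the trend only continues.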

Let’s go back to our original clans. Society A has 5 clans of 20 people each; Society B has 20 clans of 5 people each.

With 10 high-IQ and 10 low-IQ people per society, each clan in A is likely to have 2 smart and 2 dumb people. Each clan in B, by contrast, is likely to have only 1 smart or 1 dumb person. For our model, each clan will be the reproductive unit rather than each couple, and we’ll take the average IQ of each clan.

Society A: 5 clans with average of 100 IQ => social stasis.

Society B: 20 clans, 10 with an average of 96, 10 with an average of 104. Not a big difference, but if the 104s have even just a few more children over the generations than the 96s, they will gradually increase as a % of the population.

Of course, over the generations, a few of our 5-person clans will get two smart people (average IQ 108), a dumb and a smart (average 100), and two dumb (92.) The 108 clans will do very well for themselves, and the 92 clans will do very badly.
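The clan averages are easy to verify with a two-line helper (a toy calculation under the 80/100/120 setup above; `clan_average` is my own hypothetical name, not anything standard):

```python
# Toy check of the clan averages: clans are mixes of 80-, 100-, and 120-IQ members.
def clan_average(smart, dumb, size):
    """Average IQ of a clan with `smart` 120s, `dumb` 80s, and the rest 100s."""
    return (120 * smart + 80 * dumb + 100 * (size - smart - dumb)) / size

# Society A: 5 clans of 20, each with 2 smart and 2 dumb members.
print(clan_average(2, 2, 20))   # 100.0 -- every clan averages out; stasis

# Society B: 20 clans of 5.
print(clan_average(1, 0, 5))    # 104.0 -- a clan that drew one smart member
print(clan_average(0, 1, 5))    # 96.0  -- a clan that drew one dumb member
print(clan_average(2, 0, 5))    # 108.0 -- lucky clan with two smart members
print(clan_average(0, 2, 5))    # 92.0  -- unlucky clan with two dumb members
```

Note that a five-person clan with one smart member comes out to (120 + 4×100)/5 = 104; in Society A, the smart and dumb members cancel exactly, so selection between clans has nothing to grab onto.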

Speculative conclusions:

If society functions so that smart people have more offspring than dumb people (definitely not a given in the real world,) then:

In Society A, everyone benefits from the smart people, whose brains uplift their entire extended families (large clans.) This helps everyone, especially the least capable, who otherwise could not have provided for themselves. However, the average IQ in Society A doesn’t move much, because each clan is likely to have equal numbers of dumb and smart people, balancing each other out.

In Society B, the smart people are still helping their families, but since their families are smaller, random chance dictates that they are less likely to have a dumb person in their families. The families with the misfortune to have a dumb member suffer and have fewer children as a result; the families with the good fortune to have a smart member benefit and have more children as a result.

Society B has more suffering, but also evolves a higher average IQ. Society A has less suffering, but its IQ does not change.

Obviously this is a thought experiment and should not be taken as proof of anything about real-world genetics. But my suspicion is that this is basically the mechanism behind the evolution of high IQ in areas with long histories of nuclear, atomized families, and the mechanism suppressing IQ in areas with strongly tribal norms. (See HBD Chick for everything family-structure related.)


Corporations and the Litigious Environment that is Destroying America

I’ve been thinking about whether we should quit creating various forms of corporations–like LLCs–for the past 15 years or so–ever since Bakunin, more or less. But other than the fraud post a few days ago, I think the only other piece I’ve really written on the subject was a short explanation of my opposition to letting corporations have any kind of political rights (eg, donating to campaigns, freedom of speech,) on the grounds that they are non-human organisms (they are meta-human organisms,) and since I am a human and rather speciesist, I don’t want non-humans getting power.

The problem with discussing whether corporations should exist (or in what form, or whether they are good or bad,) is that people are prone to status-quo fallacies, forgetting that corporations are just legal fictions and acting instead as though they were real, physical objects or forces of nature created by the Will of God, like mountain ranges or entropy.

But a “corporation” is not so much a big building full of people as a piece of paper in your filing cabinet. Modern corporate structures did not exist throughout most of humanity’s 200,000-year existence, and in fact only came to exist when governments passed laws that created them.

All it takes to change them is a new law. Unlike mountains, they only “exist” because a law (and pieces of paper tucked away in filing cabinets,) says they do. What man has made, man can unmake.

So let’s talk about lawsuits.

America is a litigious society. Extremely litigious. Probably the most litigious in the world. (We also incarcerate a higher % of our people than any other country, though on the bright side, we summarily execute far fewer.)

Sometimes I think Americans are the kinds of people who solve disputes by punching each other, but we’ve gotten it into our heads that lawsuits are a kind of punching.

At any rate, fear of litigation and liability are ruining everything. If you don’t believe me, try setting up a roadside stand to sell some extra radishes from your garden or build a bridge over a creek on your own property. You have to pass a background check just to help out on your kid’s school field trip, and children aren’t allowed to ride their bikes in my neighborhood because, “if they got hit by a car, the HOA could get sued.” As farmer Joel Salatin put it, “Everything I Want to do is Illegal.” (All Joel wants to do is grow and sell food, but there are SO MANY REGULATIONS.)

100 years ago, the kind of litigation people are afraid of simply wouldn’t have happened. For example, as Stanford Mag recounts of campus violence around 1910:

Black eyes, bruises, and occasional bouts of unconsciousness didn’t seem to alarm the administration. … Farm life came with a brutish edge. Some freshmen slept in armed groups to ward off hazers, a state of affairs apparently enabled by the administration’s reluctance to meddle. “Persons fit to be in college are fit to look after their own affairs,” Stanford President David Starr Jordan said.

Fast forward a century to MIT getting sued by the parents of a student who killed herself:

Elizabeth Shin (February 16, 1980 – April 14, 2000) was a Massachusetts Institute of Technology student who died from burns inflicted by a fire in her dormitory room. Her death led to a lawsuit against MIT and controversy as to whether MIT paid adequate attention to its students’ mental and emotional health, and whether MIT’s suicide rate was abnormally high.

… After the incident, MIT announced an upgrade of its student counseling programs, including more staff members and longer hours. However, the Shins claimed these measures were not enough and filed a $27.65 million lawsuit against MIT, administrators, campus police officers, and its mental health employees. …

On April 3, 2006, MIT announced that the case with the family of Elizabeth Shin had been settled before trial for an undisclosed amount.[7]

Universities, of course, do not want to get sued for millions of dollars and deal with the attendant bad publicity, but these days you can’t say “Boo” on campus without someone thinking it’s the administration’s job to protect the students from emotional distress.

All of this litigation has happened (among other reasons) because corporations are seen (by juries) as cash cows.

Let’s pause a moment to discuss exactly what an LLC is (besides a piece of paper.) What’s the difference between selling your extra radishes as yourself and selling your extra radishes as a corporation? If you are selling as yourself, and one of your radishes makes a customer ill and they sue you, then you can be held personally liable for their sickness and be forced to pay their $10 million medical bill yourself, driving you into bankruptcy and ruin. But if you are selling as a corporation, then your ill customer must sue the corporation. The corporation can be found liable and forced to cover the $10 million bill, but you, the owner, are not liable; your money (the income you’ve made over the years by selling radishes) is safe.

(There are some tax-related differences, as well, but we will skip over those for now.)

There are doubtless many other varieties of corporations, most of which I am not familiar with, because I am not a specialist in corporate law. The general principle of most, if not all, corporations is that they exist independently of the people in them.

This is how Donald Trump’s businesses can have gone bankrupt umpteen times and he can still have billions of dollars.

But precisely because corporations are not people, and the people who own them are (supposedly) protected from harm, people are, I suspect, more likely to sue them, and juries more likely to award suits against them.

As a lawyer I spoke with put it, he was glad that his job only involved suing corporations, because “corporations aren’t people, so I’m not hurting anyone.”

Suppose MIT were just a guy named Mit who taught math and physics. If one of his students happened to commit suicide, would anyone sue him on the grounds that he didn’t do enough to stop her?

I doubt it. For starters, Mit wouldn’t even have millions of dollars to sue for.

When people get hurt, juries want to do something to help them. Sick people have bills that must get paid one way or another, after all. Corporations have plenty of money (or so people generally think,) but individuals don’t. A jury would hesitate to drive Mit into poverty, as that would harm him severely, but wouldn’t blink an eye at making MIT pay millions, as this hurts “no one” since MIT is not a person.

You might say that it is kind of like a war between human organisms and corporate organisms–humans try to profit off corporations, and corporations try to profit off humans. (Of course, I tend to favor humanity in this grand struggle.)

The big problem with this system is that even though corporations aren’t people, they are still composed of people. A corporation that does well can employ lots of people and make their lives better, but a corporation that gets sued into the gutter won’t be able to employ anyone at all. The more corporations have to fear getting sued, the more careful they have to be–which results in increased paperwork, record keeping, policies-on-everything, lack of individual discretion, etc., which in turn make corporations intolerable both for the people in them and the people who have to deal with them.

So what can we do?

The obvious solution of letting corporations get away with anything probably isn’t a good idea, because corporations will eat people if eating people leads to higher profits. (And as a person, I am opposed to the eating of people.)

Under our current system, protection from liability lets owners get away with cheating already–take mining corporations, which are known for extracting the resources from an area, paying their owners handsomely, and then conveniently declaring bankruptcy just before costly environmental cleanup begins. Local communities are left to foot the bill (and deal with the health effects like lead poisoning and cancer.)

The solution, IMO, is individual responsibility wherever possible. Mining companies could not fob off their cleanup costs if the owners were held liable for the costs. A few owners losing everything and ending up penniless would quickly prompt the owners of other mining companies to be very careful about how they construct their waste water ponds.

People need to interact with and be responsible to other people.

I’m probably wrong!

When trying to learn and understand approximately everything, one is forced to periodically admit that there are a great many things one does not yet know.

I made a diagram of my thoughts from yesterday:

human tree based on Haak

My intuition tells me this is wrong.

Abbreviations: SSA = Sub-Saharan Africa; ANE = Ancient North Eurasians, even though they’re found all over the place; WHG = Western Hunter-Gatherers (the European hunter-gatherers); I-Es = Indo-Europeans.

I tried my best to make it neat and clear, focusing on the big separations and leaving out the frequent cross-mixing. Where several groups had similar DNA, I used one group to represent the whole cluster (eg, Yoruba,) and left out groups whose histories were just too complicated to express clearly at this size. A big chunk of the Middle East/middle of Eurasia is a mixing zone where lots of groups seem to have merged. (Likewise, I obviously left out groups that weren’t in Haak’s dataset, like Polynesians.)

I tried to arrange the groups sensibly, so that ones that are geographically near each other and/or have intermixed are near each other on the graph, but this didn’t always work out–eg, the Inuit share some DNA with other Native American groups, but ended up sandwiched between India and Siberia.

Things get complicated around the emergence of the Indo-Europeans (I-Es), who emerged from the combination of a known population (WHG) and an unknown population that I’m super-speculating might have come from India, after which some of the I-Es might have returned to India. But then there is the mystery of why the color on the graph changes from light green to teal–did another group related to the original IEs emerge, or is this just change over time?

The IEs are also, IMO, at the wrong spot in time (so are the Pygmies.) Maybe this is just a really bad proxy for time? Maybe getting conquered makes groups combine in ways that look like they differentiated at times other than when they did?

Either way, I am, well, frustrated.

EDIT: Oh, I just realized something I did wrong.

*Fiddles*

Still speculative, but hopefully better

Among other things, I realized I’d messed up counting off where some of the groups split, so while I was fixing that, I went ahead and switched the Siberians and Melanesians so I could get the Inuit near the other Americans.

I also realized that I was trying to smush together the emergence of the WHG and the Yamnaya, even though those events happened at different times. The new version shows the WHG and Yamnaya (proto-Indo-Europeans) at two very different times.

Third, I have fixed it so that the ANE don’t feed directly into modern Europeans. The downside of the current model is that it makes it look like the ANE disappeared, when really they just dispersed into so many groups, which mixed in turn with other groups, that they ceased existing in “pure” form, though the Bedouins, I suspect, come closest.

The “light green” and “teal” colors on Haak’s graph are still problematic–light green doesn’t exist in “pure” form anywhere on the graph, but it appears to be highest in India. My interpretation is that the light green derived early on from an ANE population somewhere around India (though Iran, Pakistan, the Caucasus, or the Steppes are also possibilities,) and somewhat later mixed with an “East” population in India. A bit of that light green population also made it into the Onge, and later, I think, a branch of it combined with the WHG to create the Yamnaya. (Who, in turn, conquered some ANE groups, creating the modern Europeans.)

I should also note that I might have the Khoi and San groups backwards, because I’m not all that familiar with them.

I could edit this post and just eliminate my embarrassing mistakes, but I think I’ll let them stay in order to show the importance of paying attention to the nagging sense of being wrong. It turns out I was! I might still be wrong, but hopefully I’m less wrong.

Absolute Monarchy, Revolution, and the Bourgeoisie

So I was thinking about the Russian Revolution (as is my wont,) and wondering why everyone was so vehemently against the bourgeoisie and not, at least in their rhetoric, the nobility. (I’ve long wondered the exact same thing about the French Revolution.)

If there is one thing that all commentators seem to agree on, including the man himself, it’s that Nicholas II (aka Nikolai Alexandrovich Romanov, final Tsar of all Russia,) was not fit to rule. He was not an evil man (though he did send millions of his subjects to their deaths,) and he was not an idiot, but neither was he extraordinary in any of the ways necessary to rule an empire.

But this isn’t reason to go executing a guy. After all, Russia managed to survive the tsardom of Peter the Great’s retarded half-brother (principally by making Peter co-tsar,) so there’s no particular reason why the nobility couldn’t have just stepped in and run things for Nicholas. Poor little Alexei probably wouldn’t have lasted much longer, and then one of Nicholas’s brothers or nephews would have been in the running for tsar–seems like a pretty decent position to hold out for.

But in an absolute monarchy, how much power does the nobility have? Could they intervene and change the direction of the war (or stop/prevent it altogether?)

Louis XIV (1638 – 1715) consolidated an absolute monarchy in France (with the height of his power around 1680.) In 1789, about 110 years later, the French Revolution broke out; in 1793, Louis XVI was executed.

Peter and Catherine the Greats (1672 – 1725; 1729 – 1796) consolidated monarchical power in Russia. The Russian Revolution broke out in 1905 and then more successfully in 1917; Nicholas was executed in 1918. Assuming Catherine remained fairly powerful until her death (and I suspect she would likely have been deposed had she not been,) that gives us about 110 or 120 years between absolute monarch and revolution.

Is there a connection?

Obviously one possibility is just that folks who manage to make themselves absolute monarchs are rare indeed, and their descendants tend to regress toward normal personalities until they just aren’t politically savvy enough to hold onto power, at which point a vacuum occurs and a revolution fills it.

Revolutionaries, by and large, aren’t penniless peasants or factory workers (at least, not at the beginning.) They’re fairly idle intellectuals who have the time and resources to write lots of books and articles about revolution. Lenin was hanging out in Switzerland, writing, when the Russian Revolution broke out, not slogging through the trenches or working in a factory.

As I understand it, the consolidation of absolute monarchy requires taking power from the nobles. The nobles get their support from their personal peasants (their serfs.) The Royalty get their support against the nobles, therefore, from free men–middle class folks not bound to any particular noble. These middle-class folks tend to live in the city–they are the bourgeoisie.

Think of a ladder–or a cellular automaton–with four rungs: royals, nobles, bourgeoisie, and peasants.

If the royalty and bourgeoisie are aligned, and the nobles and peasants are aligned, then this might explain why, when Russia and France decided to execute their monarchs, they simultaneously attacked the bourgeoisie–but said little, at least explicitly and propagandically, against the nobility.

By using the peasants to attack the bourgeoisie, the nobles attacked the king’s base of support, leaving him unable to defend himself and hang onto power. A strong monarch might be able to prevent such maneuvering, but a weak monarch can’t. Nicholas II doesn’t seem like the kind of person who’d imprison infant relatives for their whole lives or have his son tortured to death. He didn’t even bother taking another wife after the tsarina failed to produce a suitable heir.

I see the exact same dynamic happening today. For the peasants, we have America’s minority communities–mostly blacks and Hispanics–who are disproportionately poor. Working and middle-class whites are the bourgeoisie. College students and striving rich are the nobles, and the royalty are the rich.

Occupy Wall Street was an attempt by student-types to call direct attention to the wealth of the royalty, but never got widespread support. By contrast, student protests attacking bourgeois whites on behalf of black peasants have been getting tons of support; their ideas are now considered mainstream, while OWS’s are still fringe.

There’s a great irony in Ivy League kids lecturing anyone about their “privilege,” much like the irony in Lenin sitting on his butt in Switzerland while complaining about the bourgeoisie.

But in this case, is the students’ real target actually the rich?