Did tobacco become popular because it kills parasites?

While reading about conditions in a Burmese prison around the turn of the previous century (The History and Romance of Crime: Oriental Prisons, by Arthur Griffiths; not a good book, incidentally), it occurred to me that the large amounts of tobacco smoke inside the prison might have had some beneficial effect. Sure, in the long run, tobacco is highly likely to give you cancer, but in the short run, is it noxious to fleas and other disease-bearing pests?

Meanwhile in Melanesia (Pygmies and Papuans), a group of ornithologists struggled up a river to reach an almost completely isolated tribe of Melanesians who barely practiced horticulture; even further up the mountain they met a band of pygmies (negritoes) whose existence had only been rumored. The pygmies cultivated tobacco, which they traded with their neighbors, who were otherwise not terribly interested in trading for worldly goods.

The homeless smoke at rates 3x higher than the rest of the population, though this might have something to do with the high correlation between schizophrenia and smoking–80% of schizophrenics smoke, compared to 20% of the general population. Obviously this correlation is best explained by tobacco’s well-noted psychological effects (including addiction,) but why is tobacco so ubiquitous in prisons that cigarettes are used as currency? Could tobacco, in unsanitary conditions, serve some healthful purpose?

From NPR: Pot For Parasites? Pygmy Men Smoke out Worms:

On average, the more THC byproduct that Hagen’s team found in an Aka man’s urine, the fewer worm eggs were present in his gut.

“The heaviest smokers, with everything else being equal, had about half the number of parasitic eggs in their stool, compared to everyone else,” Hagen says. …

THC — and nicotine — are known to kill intestinal worms in a Petri dish. And many worms make their way to the gut via the lungs. “The worms’ larval stage is in the lung,” Hagen says. “When you smoke you just blast them with THC or nicotine directly.”

Smithsonian reports that Birds Harness the Deadly Power of Nicotine to Poison Parasites:

Smoking kills. But if you’re a bird and if you want to kill parasites, that can be a good thing. City birds have taken to stuffing their nests with cigarette butts to poison potential parasites. Nature reports:

“In a study published today in Biology Letters, the researchers examined the nests of two bird species common on the North American continent. They measured the amount of cellulose acetate (a component of cigarette butts) in the nests, and found that the more there was, the fewer parasitic mites the nest contained.”

Out in the State of Nature, parasites are extremely common and difficult to get rid of (eg, hookworm elimination campaigns in the early 1900s found that 40% of school-aged children were infected); farmers can apparently use tobacco as a natural de-wormer (but be careful, as tobacco can be poisonous.)

In the pre-modern environment, when many people had neither shoes, toilets, nor purified water, parasites were very hard to avoid.
Befoundalive recommends eating the tobacco from a cigarette if you have intestinal parasites and no access to modern medicine.

Here’s a study comparing parasite rates in tobacco workers vs. prisoners in Ethiopia:

Overall, 8 intestinal parasite species have been recovered singly or in combinations from 146 (61.8 %) samples. The prevalence in prison population (88/121 = 72.7%) was significantly higher than that in tobacco farm (58/115 = 50.4%).

In vitro anthelmintic effect of Tobacco (Nicotiana tabacum) extract on parasitic nematode, Marshallagia marshalli reports:

Because of developing resistance to the existing anthelmintic drugs, there is a need for new anthelmintic agents. Tobacco plant has alkaloid materials that have antiparasitic effect. We investigated the in vitro anthelminthic effect of aqueous and alcoholic extract of Tobacco (Nicotiana tabacum) against M. marshalli. … Overall, extracts of Tobacco possess considerable anthelminthic activity and more potent effects were observed with the highest concentrations. Therefore, the in vivo study on Tobacco in animal models is recommended.

(Helminths are parasites; anthelmintic = anti-parasitic.)

So it looks like, at least in the days before sewers, toilets, and clean water, when people struggled to stay parasite-free, tobacco (and certain other drugs) may have offered people an edge over the pests. (I’ve noticed many bitter or noxious plants seem to have been useful for occasionally flushing out parasites, but you certainly don’t want to be in a state of “flush” all the time.)

It looks like it was only once sanitation became good enough that parasites were no longer a pressing worry that people started getting really concerned with tobacco’s long-term negative effects on humans.


Is Crohn’s Disease Tuberculosis of the Intestines?

Source: Rise in Crohn’s Disease admission rates, Glasgow

Crohn’s is an inflammatory disease of the digestive tract involving diarrhea, vomiting, internal lesions, pain, and severe weight loss. Left untreated, Crohn’s can lead to death through direct starvation/malnutrition, infections caused by the intestinal walls breaking down and spilling feces into the rest of the body, or a whole host of other horrible symptoms, like pyoderma gangrenosum–basically your skin just rotting off.

Crohn’s disease has no known cause and no cure, though several treatments have proven effective at putting it into remission–at least temporarily.

The disease appears to be triggered by a combination of environmental, bacterial, and genetic factors–about 70 genes have been identified so far that appear to contribute to an individual’s chance of developing Crohn’s, but no gene has been found yet that definitely triggers it. (The siblings of people who have Crohn’s are more likely than non-siblings to also have it, and identical twins of Crohn’s patients have a 55% chance of developing it.) A variety of environmental factors, such as living in a first-world country (parasites may be somewhat protective against the disease), smoking, or eating lots of animal protein, also correlate with Crohn’s, but since only about 3.2 in 1,000 people even in the West have it, these obviously don’t trigger the disease in most people.

Crohn’s appears to be a kind of over-reaction of the immune system, though not specifically an auto-immune disorder, which suggests that a pathogen of some sort is probably involved. Most people are probably able to fight off this pathogen, but people with a variety of genetic issues may have more trouble–according to Wikipedia, “There is considerable overlap between susceptibility loci for IBD and mycobacterial infections.[62]” Mycobacteria are a genus of bacteria that includes the species responsible for tuberculosis and leprosy. A variety of bacteria–including specific strains of E. coli, Yersinia, Listeria, and Mycobacterium avium subspecies paratuberculosis–are found in the intestines of Crohn’s sufferers at higher rates than in the intestines of non-sufferers (intestines, of course, are full of all kinds of bacteria.)

Source: The Gutsy Group

Crohn’s treatment depends on the severity of the case and specific symptoms, but often includes a course of antibiotics, (especially if the patient has abscesses,) tube feeding (in acute cases where the sufferer is having trouble digesting food,) and long-term immune-system suppressants such as prednisone, methotrexate, or infliximab. In severe cases, damaged portions of the intestines may be cut out. Before the development of immunosuppressant treatments, sufferers often progressively lost more and more of their intestines, with predictably unpleasant results, like no longer having a functioning colon. (70% of Crohn’s sufferers eventually have surgery.)

A similar disease, Johne’s, infects cattle. Johne’s is caused by Mycobacterium avium subspecies paratuberculosis (hereafter just MAP). MAP typically infects calves at birth, transmitted via infected feces from their mothers, incubates for two years, and then manifests as diarrhea, malnutrition, dehydration, wasting, starvation, and death. Luckily for cows, there’s a vaccine, though any infectious disease in a herd is a problem for farmers.

If you’re thinking that “paratuberculosis” sounds like “tuberculosis,” you’re correct. When scientists first isolated it, they thought the bacteria looked rather like tuberculosis, hence the name, “tuberculosis-like.” The scientists’ instincts were correct, and it turns out that MAP is in the same bacterial genus as tuberculosis and leprosy (though it may be more closely related to leprosy than TB.) (“Genus” is one step up from “species”: our species is Homo sapiens; our genus, Homo, we share with Homo neanderthalensis, Homo erectus, etc., but chimps and gorillas are not in the genus Homo.)

Figure A: Crohn’s Disease in Humans. Figure B: Johne’s Disease in Animals. Source: Greenstein, Lancet Infectious Diseases, 2004. H/T Human Para Foundation

The intestines of cattle who have died of MAP look remarkably like the intestines of people suffering from advanced Crohn’s disease.

MAP can actually infect all sorts of mammals, not just cows; it’s just more common and problematic in cattle herds. (Sorry, we’re not getting through this post without photos of infected intestines.)

So here’s how it could work:

The MAP bacterium–possibly transmitted via milk or meat products–is fairly common and infects a variety of mammals. Most people who encounter it fight it off with no difficulty (or perhaps have a short bout of diarrhea and then recover.)

A few people, though, have genetic issues that make it harder for them to fight off the infection. For example, Crohn’s sufferers produce less intestinal mucus, which normally acts as a barrier between the intestines and all of the stuff in them.

Interestingly, parasite infections can increase intestinal mucus (some parasites feed on mucus), which in turn is protective against other forms of infection; decreasing parasite load can increase the chance of other intestinal infections.

Once MAP enters the intestinal walls, the immune system attempts to fight it off, but a genetic defect in autophagy results in the immune cells themselves getting infected. The body responds to the signs of infection by sending more immune cells to fight it, which subsequently also get infected with MAP, triggering the body to send even more immune cells. These lumps of infected cells become the characteristic ulcerations and lesions that mark Crohn’s disease and eventually leave the intestines riddled with inflamed tissue and holes.

The most effective treatments for Crohn’s, like infliximab, don’t target the infection but the immune system. They work by interrupting the immune system’s feedback cycle so that it stops sending more cells to the infected area, giving the already infected cells a chance to die. This doesn’t cure the disease, but it does give the intestines time to recover.

Unfortunately, this means infliximab raises your chance of developing TB:

There were 70 reported cases of tuberculosis after treatment with infliximab for a median of 12 weeks. In 48 patients, tuberculosis developed after three or fewer infusions. … Of the 70 reports, 64 were from countries with a low incidence of tuberculosis. The reported frequency of tuberculosis in association with infliximab therapy was much higher than the reported frequency of other opportunistic infections associated with this drug. In addition, the rate of reported cases of tuberculosis among patients treated with infliximab was higher than the available background rates.

because it is actively suppressing the immune system’s ability to fight diseases in the TB family.

Luckily, if you live in the first world and aren’t in prison, you’re unlikely to catch TB–only about 5-10% of the US population tests positive for TB, compared to 80% in many African and Asian countries. (In other words, increased immigration from these countries will absolutely put Crohn’s sufferers at risk of dying.)

There are a fair number of similarities between Crohn’s, TB, and leprosy: they are all very slow diseases that can take years to finally kill you. By contrast, other deadly diseases, like smallpox, cholera, and yersinia pestis (plague), spread and kill extremely quickly. Within about two weeks, you’ll definitely know if your plague infection is going to kill you or not, whereas you can have leprosy for 20 years before you even notice it.

TB, like Crohn’s, creates granulomas:

Tuberculosis is classified as one of the granulomatous inflammatory diseases. Macrophages, T lymphocytes, B lymphocytes, and fibroblasts aggregate to form granulomas, with lymphocytes surrounding the infected macrophages. When other macrophages attack the infected macrophage, they fuse together to form a giant multinucleated cell in the alveolar lumen. The granuloma may prevent dissemination of the mycobacteria and provide a local environment for interaction of cells of the immune system.[63] However, more recent evidence suggests that the bacteria use the granulomas to avoid destruction by the host’s immune system. … In many people, the infection waxes and wanes.

Crohn’s also waxes and wanes. Many sufferers experience flare-ups of the disease, during which they may have to be hospitalized, tube fed, and put through another round of antibiotics or resection (surgical removal of part of the intestines) before they improve–until the disease flares up again.

Leprosy is also marked by lesions, though of course so are dozens of other diseases.

Note: Since Crohn’s is a complex, multi-factorial disease, there may be more than one bacterium or pathogen that could infect people and create similar results. Alternatively, Crohn’s sufferers may simply have intestines that are really bad at fighting off all sorts of diseases, as a side effect of Crohn’s, not a cause, resulting in a variety of unpleasant infections.

The MAP hypothesis suggests several possible treatment routes:

  1. Improving the intestinal mucus, perhaps via parasites or medicines derived from parasites
  2. Improving the intestinal microbe balance
  3. Antibiotics that target MAP
  4. An anti-MAP vaccine similar to the one for Johne’s disease in cattle
  5. Eliminating MAP from the food supply

Here’s an article about parasites and Crohn’s:

To determine how the worms could be our frenemies, Cadwell and colleagues tested mice with the same genetic defect found in many people with Crohn’s disease. Mucus-secreting cells in the intestines malfunction in the animals, reducing the amount of mucus that protects the gut lining from harmful bacteria. Researchers have also detected a change in the rodents’ microbiome, the natural microbial community in their guts. The abundance of one microbe, an inflammation-inducing bacterium in the Bacteroides group, soars in the mice with the genetic defect.

The researchers found that feeding the rodents one type of intestinal worm restored their mucus-producing cells to normal. At the same time, levels of two inflammation indicators declined in the animals’ intestines. In addition, the bacterial lineup in the rodents’ guts shifted, the team reports online today in Science. Bacteroides’s numbers plunged, whereas the prevalence of species in a different microbial group, the Clostridiales, increased. A second species of worm also triggers similar changes in the mice’s intestines, the team confirmed.

To check whether helminths cause the same effects in people, the scientists compared two populations in Malaysia: urbanites living in Kuala Lumpur, who harbor few intestinal parasites, and members of an indigenous group, the Orang Asli, who live in a rural area where the worms are rife. A type of Bacteroides, the proinflammatory microbes, predominated in the residents of Kuala Lumpur. It was rarer among the Orang Asli, where a member of the Clostridiales group was plentiful. Treating the Orang Asli with drugs to kill their intestinal worms reversed this pattern, favoring Bacteroides species over Clostridiales species, the team documented.

This sounds unethical, unless they were merely tagging along with another team of doctors who were de-worming the Orang Asli for normal health reasons and didn’t intend to potentially inflict Crohn’s on anyone. Nevertheless, it’s an interesting study.

At any rate, so far they haven’t managed to produce an effective medicine from parasites, possibly in part because people think parasites are icky.

But if parasites aren’t disgusting enough for you, there’s always the option of directly changing the gut bacteria: fecal microbiota transplants (FMT). A fecal transplant is exactly what it sounds like: you take the regular feces out of the patient and put in new, fresh feces from an uninfected donor. (When your other option is pooping into a bag for the rest of your life because your colon was removed, swallowing a few poop pills doesn’t sound so bad.) E.g., Fecal microbiota transplant for refractory Crohn’s:

Approximately one-third of patients with Crohn’s disease do not respond to conventional treatments, and some experience significant adverse effects, such as serious infections and lymphoma, and many patients require surgery due to complications. … Herein, we present a patient with Crohn’s colitis in whom biologic therapy failed previously, but clinical remission and endoscopic improvement was achieved after a single fecal microbiota transplantation infusion.

Here’s a Chinese doctor who appears to have good success with FMTs to treat Crohn’s–improvement in 87% of patients one month after treatment and remission in 77%, though the effects may wear off over time. Note: even infliximab, considered a “wonder drug” for its amazing abilities, only works for about 50-75% of patients, must be administered via regular IV infusions for life (or until it stops working,) costs about $20,000 a year per patient, and has some serious side effects, like cancer. If fecal transplants can get the same results, that’s pretty good.

Little known fact: “In the United States, the Food and Drug Administration (FDA) has regulated human feces as an experimental drug since 2013.”

Antibiotics are another potential route. RedHill Biopharma is conducting a phase III clinical study of antibiotics designed to fight MAP in Crohn’s patients. RedHill is expected to release some of its results in April.

A Crohn’s MAP vaccine trial is underway in healthy volunteers:

Mechanism of action: The vaccine is what is called a ‘T-cell’ vaccine. T-cells are a type of white blood cell -an important player in the immune system- in particular, for fighting against organisms that hide INSIDE the body’s cells –like MAP does. Many people are exposed to MAP but most don’t get Crohn’s –Why? Because their T-cells can ‘see’ and destroy MAP. In those who do get Crohn’s, the immune system has a ‘blind spot’ –their T-cells cannot see MAP. The vaccine works by UN-BLINDING the immune system to MAP, reversing the immune dysregulation and programming the body’s own T-cells to seek out and destroy cells containing MAP.

Efficacy: In extensive tests in animals (in mice and in cattle), 2 shots of the vaccine spaced 8 weeks apart proved to be a powerful, long-lasting stimulant of immunity against MAP.

Before: Fistula in the intestines, 31 year old Crohn’s patient–Dr Borody, Combining infliximab, anti-MAP and hyperbaric oxygen therapy for resistant fistulizing Crohn’s disease

Dr. Borody (who was influential in the discovery that ulcers are caused by the H. pylori bacteria and not stress) has had amazing success treating Crohn’s patients with a combination of infliximab, anti-MAP antibiotics, and hyperbaric oxygen. Here are two of his before-and-after photos of the intestines of a 31-year-old Crohn’s sufferer:

Here are some more interesting articles on the subject:

Sources: Is Crohn’s Disease caused by a Mycobacterium? Comparisons with Tuberculosis, Leprosy, and Johne’s Disease.

What is MAP?

Researcher Finds Possible Link Between Cattle and Human Diseases:

Last week, Davis and colleagues in the U.S. and India published a case report in Frontiers of Medicine http://journal.frontiersin.org/article/10.3389/fmed.2016.00049/full . The report described a single patient, clearly infected with MAP, with the classic features of Johne’s disease in cattle, including the massive shedding of MAP in his feces. The patient was also ill with clinical features that were indistinguishable from the clinical features of Crohn’s. In this case though, a novel treatment approach cleared the patient’s infection.

The patient was treated with antibiotics known to be effective for tuberculosis, which then eliminated the clinical symptoms of Crohn’s disease, too.

After: The same intestines, now healed

Psychology Today: Treating Crohn’s Disease:

Through luck, hard work, good fortune, perseverance, and wonderful doctors, I seem to be one of the few people in the world who can claim to be “cured” of Crohn’s Disease. … In brief, I was treated for 6 years with medications normally used for multidrug resistant TB and leprosy, under the theory that a particular germ causes Crohn’s Disease. I got well, and have been entirely well since 2004. I do not follow a particular diet, and my recent colonoscopies and blood work have shown that I have no inflammation. The rest of these 3 blogs will explain more of the story.

What about removing Johne’s disease from the food supply? Assuming Johne’s is the culprit, this may be hard to do (it’s pretty contagious in cattle, can lie dormant for years, and survives cooking), but drinking ultrapasteurized milk may be protective, especially for people who are susceptible to the disease.

***

However… there are also studies that contradict the MAP theory. For example, a recent study of the rate of Crohn’s disease in people exposed to Johne’s disease found no correlation. (However, Crohn’s is a pretty rare condition, and the survey found only 7 total cases, which is small enough that random chance could be a factor–but we are talking about people who probably got very up close and personal with MAP-infected feces.)

Another study found a negative correlation between Crohn’s and milk consumption:

Logistic regression showed no significant association with measures of potential contamination of water sources with MAP, water intake, or water treatment. Multivariate analysis showed that consumption of pasteurized milk (per kg/month: odds ratio (OR) = 0.82, 95% confidence interval (CI): 0.69, 0.97) was associated with a reduced risk of Crohn’s disease. Meat intake (per kg/month: OR = 1.40, 95% CI: 1.17, 1.67) was associated with a significantly increased risk of Crohn’s disease, whereas fruit consumption (per kg/month: OR = 0.78, 95% CI: 0.67, 0.92) was associated with reduced risk.

So even if Crohn’s is caused by MAP or something similar, it appears that people aren’t catching it from milk.

There are other theories about what causes Crohn’s–these folks, for example, think it’s related to consumption of GMO corn. Perhaps MAP has only been found in the intestines of Crohn’s patients because people with Crohn’s are really bad at fighting off infections. Perhaps the whole thing is caused by weird gut bacteria, or not enough parasites, insufficient Vitamin D, or industrial pollution.

The condition remains very much a mystery.

2 Interesting studies: Early Humans in SE Asia and Genetics, Relationships, and Mental Illness

Ancient Teeth Push Back Early Arrival of Humans in Southeast Asia:

New tests on two ancient teeth found in a cave in Indonesia more than 120 years ago have established that early modern humans arrived in Southeast Asia at least 20,000 years earlier than scientists previously thought, according to a new study. …

The findings push back the date of the earliest known modern human presence in tropical Southeast Asia to between 63,000 and 73,000 years ago. The new study also suggests that early modern humans could have made the crossing to Australia much earlier than the commonly accepted time frame of 60,000 to 65,000 years ago.

I would like to emphasize that nothing based on a couple of teeth is conclusive, “settled,” or “proven” science. Samples can get contaminated, machines make errors, people play tricks–in the end, we’re looking for the weight of the evidence.

I am personally of the opinion that there were (at least) two ancient human migrations into south east Asia, but only time will tell if I am correct.

Genome-wide association study of social relationship satisfaction: significant loci and correlations with psychiatric conditions, by Varun Warrier, Thomas Bourgeron, Simon Baron-Cohen:

We investigated the genetic architecture of family relationship satisfaction and friendship satisfaction in the UK Biobank. …

In the DSM-5, difficulties in social functioning is one of the criteria for diagnosing conditions such as autism, anorexia nervosa, schizophrenia, and bipolar disorder. However, little is known about the genetic architecture of social relationship satisfaction, and if social relationship dissatisfaction genetically contributes to risk for psychiatric conditions. …

We present the results of a large-scale genome-wide association study of social relationship satisfaction in the UK Biobank measured using family relationship satisfaction and friendship satisfaction. Despite the modest phenotypic correlations, there was a significant and high genetic correlation between the two phenotypes, suggesting a similar genetic architecture between the two phenotypes.

Note: the two “phenotypes” here are “family relationship satisfaction” and “friendship satisfaction.”

We first investigated if the two phenotypes were genetically correlated with psychiatric conditions. As predicted, most if not all psychiatric conditions had a significant negative correlation for the two phenotypes. … We observed significant negative genetic correlation between the two phenotypes and a large cross-condition psychiatric GWAS [38]. This underscores the importance of social relationship dissatisfaction in psychiatric conditions. …

In other words, people with mental illnesses generally don’t have a lot of friends nor get along with their families.

One notable exception is the negative genetic correlation between measures of cognition and the two phenotypes. Whilst subjective wellbeing is positively genetically correlated with measures of cognition, we identify a small but statistically significant negative correlation between measures of cognition and the two phenotypes.

Are they saying that smart people have fewer friends? Or that dumber people are happier with their friends and families? I think they are couching this finding in intentionally obtuse language.

A recent study highlighted that people with very high IQ scores tend to report lower satisfaction with life with more frequent socialization.

Oh, I think I read that one. It’s not the socialization per se that’s the problem, but spending time away from the smart person’s intellectual activities. For example, I enjoy discussing the latest genetics findings with friends, but I don’t enjoy going on family vacations because they are a lot of work that does not involve genetics. (This is actually something my relatives complain about.)

…alleles that increase the risk for schizophrenia are in the same haplotype as alleles that decrease friendship satisfaction. The functional consequences of this locus must be formally tested. …

Loss of function mutations in these genes lead to severe biochemical consequences, and are implicated in several neuropsychiatric conditions. For example, de novo loss of function mutations in pLI intolerant genes confers significant risk for autism. Our results suggest that pLI > 0.9 genes contribute to psychiatric risk through both common and rare genetic variation.

Evolution is slow–until it’s fast: Genetic Load and the Future of Humanity

Source: Priceonomics

A species may live in relative equilibrium with its environment, hardly changing from generation to generation, for millions of years. Turtles, for example, have barely changed since the Cretaceous, when dinosaurs still roamed the Earth.

But if the environment changes–critically, if selective pressures change–then the species will change, too. This was most famously demonstrated with English peppered moths, which changed color from white-and-black speckled to pure black when pollution darkened the trunks of the trees they lived on. To survive, these moths need to avoid being eaten by birds, so any moth that stands out against the tree trunks tends to get turned into an avian snack. Against light-colored trees, dark-colored moths stood out and were eaten. Against dark-colored trees, light-colored moths stood out.

This change did not require millions of years. Dark-colored moths were virtually unknown in 1810, but by 1895, 98% of the moths were black.

The time it takes for evolution to occur depends on two things: (A) the frequency of a trait in the population, and (B) how strongly you are selecting for (or against) it.

Let’s break this down a little bit. Within a species, there exists a great deal of genetic variation. Some of this variation happens because two parents with different genes get together and produce offspring with a combination of their genes. Some of this variation happens because of random errors–mutations–that occur during copying of the genetic code. Much of the “natural variation” we see today started as some kind of error that proved to be useful, or at least not harmful. For example, all humans originally had dark skin similar to modern Africans’, but random mutations in some of the folks who no longer lived in Africa gave them lighter skin, eventually producing “white” and “Asian” skin tones.

(These random mutations also happen in Africa, but there they are harmful and so don’t stick around.)

Natural selection can only act on the traits that are actually present in the population. If we tried to select for “ability to shoot x-ray lasers from our eyes,” we wouldn’t get very far, because no one actually has that mutation. By contrast, albinism is rare, but it definitely exists, and if for some reason we wanted to select for it, we certainly could. (The incidence of albinism among the Hopi Indians is high enough–1 in 200 Hopis vs. 1 in 20,000 Europeans generally and 1 in 30,000 Southern Europeans–for scientists to discuss whether the Hopi have been actively selecting for albinism. This still isn’t a lot of albinism, but since the American Southwest is not a good environment for pale skin, it’s something.)
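To put numbers on that: if we assume albinism behaves like a single recessive trait in a randomly mating population (a simplification; several different genes can cause albinism), Hardy-Weinberg proportions let us back the allele frequency out of the incidence:

$$q = \sqrt{\text{incidence}}, \qquad q_{\text{Hopi}} = \sqrt{1/200} \approx 0.07, \qquad q_{\text{European}} = \sqrt{1/20{,}000} \approx 0.007$$

So the underlying allele would be roughly ten times more common among the Hopi than among Europeans, even though affected individuals are rare in both groups: plenty of standing variation for selection to work with.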

You will have a much easier time selecting for traits that crop up more frequently in your population than traits that crop up rarely (or never).

Second, we have intensity–and variety–of selective pressure. What % of your population is getting removed by natural selection each year? If 50% of your moths get eaten by birds because they’re too light, you’ll get a much faster change than if only 10% of moths get eaten.
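To make both points concrete, here is a minimal one-locus selection sketch in Python. It is deliberately simplified: a haploid model with made-up starting frequencies and selection coefficients (real peppered-moth genetics involved a dominant allele in a diploid population), so treat the numbers as illustrative only.

```python
def generations_until(p0, target, s):
    """Generations for a favored type starting at frequency p0 to reach
    frequency `target`, when the other type loses fraction s of its
    carriers to predation each generation (haploid, one locus)."""
    p, gens = p0, 0
    while p < target:
        p = p / (p + (1 - p) * (1 - s))  # frequency among survivors
        gens += 1
    return gens

# Dark moths start at 1% of the population; we wait until they hit 98%.
for s in (0.5, 0.1):
    print(f"s = {s}: {generations_until(0.01, 0.98, s)} generations")
# s = 0.5 -> 13 generations; s = 0.1 -> 81 generations
```

With one moth generation per year, the weak-selection case takes most of a century–roughly the 1810-to-1895 timescale described above–while the strong-selection case is almost an order of magnitude faster.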

Selection doesn’t have to involve getting eaten, though. Perhaps some of your moths are moth Lotharios, seducing all of the moth ladies with their fuzzy antennae. Over time, the moth population will develop fuzzier antennae as these handsome males out-reproduce their less hirsute cousins.

No matter what kind of selection you have, nor what part of your curve it’s working on, all that ultimately matters is how many offspring each individual has. If white moths have more children than black moths, then you end up with more white moths. If black moths have more babies, then you get more black moths.

Source: SUPS.org

So what happens when you completely remove selective pressures from a population?

Back in 1968, ethologist John B. Calhoun set up an experiment popularly called “Mouse Utopia.” Four pairs of mice were given a large, comfortable habitat with no predators and plenty of food and water.

Predictably, the mouse population increased rapidly–once the mice were established in their new homes, their population doubled every 55 days. But after 211 days of explosive growth, reproduction began–mysteriously–to slow. For the next 245 days, the mouse population doubled only once every 145 days.
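As a rough consistency check (assuming clean exponential growth from the four founding pairs, i.e. $N_0 = 8$), the doubling-time formula gives

$$N(t) = N_0 \cdot 2^{t/T_d}, \qquad N(211\text{ days}) \approx 8 \cdot 2^{211/55} \approx 8 \cdot 14.3 \approx 115 \text{ mice},$$

so the colony was already sizeable when growth slowed; at the new 145-day doubling time, the next 245 days would add only about 1.7 further doublings.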

The birth rate continued to decline. As births and deaths reached parity, the mouse population stopped growing. Finally the last breeding female died, and the whole colony went extinct.


As I’ve mentioned before, Israel is (AFAIK) the only developed country in the world with a TFR above replacement.

It has long been known that overcrowding leads to population stress and reduced reproduction, but overcrowding can only explain why the mouse population began to shrink–not why it died out. Surely by the time there were only a few breeding pairs left, things had become comfortable enough for the remaining mice to resume reproducing. Why did the population not stabilize at some comfortable level?

Professor Bruce Charlton suggests an alternative explanation: the removal of selective pressures on the mouse population resulted in increasing mutational load, until the entire population became too mutated to reproduce.

What is genetic load?

As I mentioned before, every time a cell replicates, a certain number of errors–mutations–occur. Occasionally these mutations are useful, but the vast majority of them are not. About 30-50% of pregnancies end in miscarriage (the percent of miscarriages people recognize is lower because embryos often miscarry before causing any overt signs of pregnancy,) and the majority of those miscarriages are caused by genetic errors.

Unfortunately, randomly changing part of your genetic code is more likely to give you no skin than skintanium armor.

But only the worst genetic problems never see the light of day. Plenty of mutations merely reduce fitness without actually killing you. Down Syndrome, famously, is caused by an extra copy of chromosome 21.

While a few traits–such as sex or eye color–can be simply modeled as influenced by only one or two genes, many traits–such as height or IQ–appear to be influenced by hundreds or thousands of genes:

Differences in human height is 60–80% heritable, according to several twin studies[19] and has been considered polygenic since the Mendelian-biometrician debate a hundred years ago. A genome-wide association (GWA) study of more than 180,000 individuals has identified hundreds of genetic variants in at least 180 loci associated with adult human height.[20] The number of individuals has since been expanded to 253,288 individuals and the number of genetic variants identified is 697 in 423 genetic loci.[21]

Obviously each of these genes plays only a small role in determining overall height (and this is of course holding environmental factors constant.) There are a few extreme conditions–gigantism and dwarfism–that are caused by single mutations, but the vast majority of height variation is caused by which particular mix of those 700 or so variants you happen to have.

The situation with IQ is similar:

Intelligence in the normal range is a polygenic trait, meaning it’s influenced by more than one gene.[3][4]

The general figure for the heritability of IQ, according to an authoritative American Psychological Association report, is 0.45 for children, and rises to around 0.75 for late teens and adults.[5][6] In simpler terms, IQ goes from being weakly correlated with genetics, for children, to being strongly correlated with genetics for late teens and adults. … Recent studies suggest that family and parenting characteristics are not significant contributors to variation in IQ scores;[8] however, poor prenatal environment, malnutrition and disease can have deleterious effects.[9][10]

And from a recent article published in Nature Genetics, Genome-wide association meta-analysis of 78,308 individuals identifies new loci and genes influencing human intelligence:

Despite intelligence having substantial heritability [2] (0.54) and a confirmed polygenic nature, initial genetic studies were mostly underpowered [3, 4, 5]. Here we report a meta-analysis for intelligence of 78,308 individuals. We identify 336 associated SNPs (METAL P < 5 × 10^−8) in 18 genomic loci, of which 15 are new. Around half of the SNPs are located inside a gene, implicating 22 genes, of which 11 are new findings. Gene-based analyses identified an additional 30 genes (MAGMA P < 2.73 × 10^−6), of which all but one had not been implicated previously. We show that the identified genes are predominantly expressed in brain tissue, and pathway analysis indicates the involvement of genes regulating cell development (MAGMA competitive P = 3.5 × 10^−6). Despite the well-known difference in twin-based heritability [2] for intelligence in childhood (0.45) and adulthood (0.80), we show substantial genetic correlation (rg = 0.89, LD score regression P = 5.4 × 10^−29). These findings provide new insight into the genetic architecture of intelligence.

The more genes influence a trait, the harder they are to identify without extremely large studies, because any small group of people might not even have the same set of relevant genes.

High IQ correlates positively with a number of life outcomes, like health and longevity, while low IQ correlates with negative outcomes like disease, mental illness, and early death. Obviously this is in part because dumb people are more likely to make dumb choices which lead to death or disease, but IQ also correlates with choice-free matters like height and your ability to quickly press a button. Our brains are not some mysterious entities floating in a void, but physical parts of our bodies, and anything that affects our overall health and physical functioning is likely to also have an effect on our brains.

Like height, most of the genetic variation in IQ is the combined result of many genes. We’ve definitely found some mutations that result in abnormally low IQ, but so far we have yet (AFAIK) to find any genes that produce an IQ equivalent of gigantism. In other words, low (genetic) IQ is caused by genetic load–Small Yet Important Genetic Differences Between Highly Intelligent People and General Population:

The study focused, for the first time, on rare, functional SNPs – rare because previous research had only considered common SNPs and functional because these are SNPs that are likely to cause differences in the creation of proteins.

The researchers did not find any individual protein-altering SNPs that met strict criteria for differences between the high-intelligence group and the control group. However, for SNPs that showed some difference between the groups, the rare allele was less frequently observed in the high intelligence group. This observation is consistent with research indicating that rare functional alleles are more often detrimental than beneficial to intelligence.

Maternal mortality rates over time, UK data

Greg Cochran has some interesting Thoughts on Genetic Load. (Currently, the most interesting candidate genes for potentially increasing IQ also have terrible side effects, like autism, Tay-Sachs, and torsion dystonia. The idea is that–perhaps–if you have only a few genes related to the condition, you get an IQ boost, but if you have too many, you get screwed.) Of course, even conventional high IQ has a cost: increased maternal mortality (larger heads).

Wikipedia defines genetic load as:

the difference between the fitness of an average genotype in a population and the fitness of some reference genotype, which may be either the best present in a population, or may be the theoretically optimal genotype. … Deleterious mutation load is the main contributing factor to genetic load overall.[5] Most mutations are deleterious, and occur at a high rate.

There’s math, if you want it.
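The core of it is short enough to state here; these are standard population-genetics results, not specific to Wikipedia’s treatment. Load is measured against the fittest genotype, and at mutation-selection equilibrium the Haldane-Muller principle says the load contributed by a locus depends on its mutation rate $\mu$, not on how harmful each individual mutation is:

$$L = \frac{w_{\max} - \bar{w}}{w_{\max}}, \qquad L_{\text{locus}} \approx \begin{cases} 2\mu & \text{if the deleterious allele is at least partially dominant,} \\ \mu & \text{if it is fully recessive.} \end{cases}$$

The intuition: a nastier mutation is purged faster, so fewer copies are circulating at any moment; severity and abundance cancel, leaving only the rate at which new copies arrive.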

Normally, genetic mutations are removed from the population at a rate determined by how bad they are. Really bad mutations kill you instantly, and so are never born. Slightly less bad mutations might survive, but never reproduce. Mutations that are only a little bit deleterious might have no obvious effect, but result in having slightly fewer children than your neighbors. Over many generations, this mutation will eventually disappear.

(Some mutations are more complicated–sickle cell, for example, is protective against malaria if you have only one copy of the mutation, but gives you sickle cell anemia if you have two.)
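That sickle-cell balance can be written down exactly. Assigning illustrative fitnesses $1 - s$ to non-carriers (who are vulnerable to malaria), $1$ to carriers, and $1 - t$ to those with two copies (who get sickle cell anemia), the equilibrium frequency of the sickle allele is

$$\hat{q} = \frac{s}{s + t},$$

so with, say, $s = 0.1$ and $t = 0.8$ (made-up but plausible magnitudes), $\hat{q} \approx 0.11$, which is in the neighborhood of sickle-allele frequencies actually observed in heavily malarial regions.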

Jakubany is a town in the Carpathian Mountains

Throughout history, infant mortality was our single biggest killer. For example, here is some data from Jakubany, a town in the Carpathian Mountains:

We can see that, prior to the 1900s, the town’s infant mortality rate stayed consistently above 20%, and often peaked near 80%.

The graph’s creator states:

When I first ran a calculation of the infant mortality rate, I could not believe certain of the intermediate results. I recompiled all of the data and recalculated … with the same astounding result – 50.4% of the children born in Jakubany between the years 1772 and 1890 would die before reaching ten years of age! …one out of every two! Further, over the same 118 year period, of the 13,306 children who were born, 2,958 died (~22%) before reaching the age of one.

Historical infant mortality rates can be difficult to calculate in part because they were so high that people didn’t always bother to record infant deaths. And since infants are small and their bones delicate, their burials are not as easy to find as adults’. Nevertheless, Wikipedia estimates that Paleolithic man had an average life expectancy of 33 years:

Based on the data from recent hunter-gatherer populations, it is estimated that at 15, life expectancy was an additional 39 years (total 54), with a 0.60 probability of reaching 15.[12]

Priceonomics: Why life expectancy is misleading

In other words, a 40% chance of dying in childhood. (Not exactly the same as infant mortality, but close.)
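A back-of-the-envelope check shows how those numbers fit together, and why “a life expectancy of 33” never meant adults dropping dead at 33. Treating life expectancy at birth as a weighted average of the two groups:

$$e_0 = \underbrace{0.60 \times (15 + 39)}_{\text{survived childhood}} + \underbrace{0.40 \times \bar{a}}_{\text{died before 15}} = 33 \;\Rightarrow\; \bar{a} \approx 1.5 \text{ years},$$

where $\bar{a}$ is the average age at death of those who died young. An average of about a year and a half means most of those childhood deaths were deaths in infancy.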

Wikipedia gives similarly dismal stats for life expectancy in the Neolithic (20-33), Bronze and Iron ages (26), Classical Greece (28 or 25), Classical Rome (20-30), Pre-Columbian Southwest US (25-30), Medieval Islamic Caliphate (35), Late Medieval English Peerage (30), early modern England (33-40), and the whole world in 1900 (31).

Over at ThoughtCo: Surviving Infancy in the Middle Ages, the author reports estimates of between 30 and 50% infant mortality. I recall a study on Anasazi nutrition, which I sadly can’t locate right now, that found 100% malnutrition rates among adults (based on enamel hypoplasias,) and 50% infant mortality.

As Priceonomics notes, the main driver of increasing global life expectancy–48 years in 1950 and 71.5 years in 2014 (according to Wikipedia)–has been a massive decrease in infant mortality. The average life expectancy of an American newborn back in 1900 was only 47 and a half years, whereas a 60 year old could expect to live to be 75. In 1998, the average infant could expect to live to about 75, and the average 60 year old could expect to live to about 80.

Back in his post on Mousetopia, Charlton writes:

Michael A Woodley suggests that what was going on [in the Mouse experiment] was much more likely to be mutation accumulation; with deleterious (but non-fatal) genes incrementally accumulating with each generation and generating a wide range of increasingly maladaptive behavioural pathologies; this process rapidly overwhelming and destroying the population before any beneficial mutations could emerge to ‘save’ the colony from extinction. …

The reason why mouse utopia might produce so rapid and extreme a mutation accumulation is that wild mice naturally suffer very high mortality rates from predation. …

Thus mutation selection balance is in operation among wild mice, with very high mortality rates continually weeding-out the high rate of spontaneously-occurring new mutations (especially among males) – with typically only a small and relatively mutation-free proportion of the (large numbers of) offspring surviving to reproduce; and a minority of the most active and healthy (mutation free) males siring the bulk of each generation.

However, in Mouse Utopia, there is no predation and all the other causes of mortality (eg. Starvation, violence from other mice) are reduced to a minimum – so the frequent mutations just accumulate, generation upon generation – randomly producing all sorts of pathological (maladaptive) behaviours.
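Charlton’s mechanism is easy to caricature in code. Below is a toy Wright-Fisher-style simulation; every parameter is made up (0.5 new deleterious mutations per mouse per generation, each cutting fitness by 2%, asexual reproduction, population capped at 500), so it illustrates the logic rather than the real biology.

```python
import math
import random

U, S, POP, GENS = 0.5, 0.02, 500, 100  # mutation rate, cost per mutation, size, generations

def poisson(lam):
    # Knuth's algorithm, to avoid pulling in numpy for one function
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def mean_load(selection):
    pop = [0] * POP  # each entry: one individual's count of deleterious mutations
    for _ in range(GENS):
        # each offspring inherits a parent's mutations plus some new ones
        offspring = [m + poisson(U) for m in random.choices(pop, k=POP)]
        if selection:
            # "predation": chance of surviving to breed scales as (1 - S)^mutations
            weights = [(1 - S) ** m for m in offspring]
            pop = random.choices(offspring, weights=weights, k=POP)
        else:
            pop = offspring  # mouse utopia: no selective deaths
    return sum(pop) / POP

print("mean mutations per mouse, with selection:   ", round(mean_load(True), 1))
print("mean mutations per mouse, without selection:", round(mean_load(False), 1))
```

With selection on, the load levels off near the mutation-selection balance (around U/S = 25 mutations in this parameterization); with selection off, it simply climbs by U every generation, with no equilibrium in sight.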

Historically speaking, another selective factor operated on humans: while about 67% of women reproduced, only 33% of men did. By contrast, according to Psychology Today, a majority of today’s men have or will have children.

Today, almost everyone in the developed world has plenty of food, a comfortable home, and doesn’t have to worry about dying of bubonic plague. We live in humantopia, where the biggest factor influencing how many kids you have is how many you want to have.


Back in 1930, infant mortality rates were highest among the children of unskilled manual laborers, and lowest among the children of professionals (IIRC, this is British data.) Today, infant mortality is almost non-existent, but voluntary childlessness has now inverted this phenomenon:

Yes, the percent of childless women appears to have declined since 1994, but the overall pattern of who is having children still holds. Further, while only 8% of women with postgraduate degrees have 4 or more children, 26% of those who never graduated from high school have 4+ kids. Meanwhile, the age of first-time moms has continued to climb.

In other words, the strongest remover of genetic load–infant mortality–has all but disappeared; populations with higher load (lower IQ) are having more children than populations with lower load; and everyone is having children later, which also increases genetic load.

Take a moment to consider the high-infant mortality situation: an average couple has a dozen children. Four of them, by random good luck, inherit a good combination of the couple’s genes and turn out healthy and smart. Four, by random bad luck, get a less lucky combination of genes and turn out not particularly healthy or smart. And four, by very bad luck, get some unpleasant mutations that render them quite unhealthy and rather dull.

Infant mortality claims half their children, taking the least healthy. They are left with 4 bright children and 2 moderately intelligent children. The four bright children succeed at life, marry well, and end up with several healthy, surviving children of their own, while the moderately intelligent do okay and end up with a couple of children.

On average, society’s overall health and IQ should hold steady or even increase over time, depending on how strong the selective pressures actually are.

Or consider a consanguineous couple with a high risk of genetic birth defects: perhaps a full 80% of their children die, but 20% turn out healthy and survive.

Today, by contrast, your average couple has two children. One of them is lucky, healthy, and smart. The other is unlucky, unhealthy, and dumb. Both survive. The lucky kid goes to college, majors in underwater intersectionist basket-weaving, and has one kid at age 40. That kid has Down Syndrome and never reproduces. The unlucky kid can’t keep a job, has chronic health problems, and 3 children by three different partners.

Your consanguineous couple migrates from war-torn Somalia to Minnesota. They still have 12 kids, but three of them are autistic with IQs below the official retardation threshold. “We never had this back in Somalia,” they cry. “We don’t even have a word for it.”

People normally think of dysgenics as merely “the dumb outbreed the smart,” but genetic load applies to everyone–men and women, smart and dull, black and white, young and especially old–because we all make random transcription errors when copying our DNA.

I could offer a list of signs of increasing genetic load, but there’s no way to avoid cherry-picking trends I already know are happening, like falling sperm counts or rising (diagnosed) autism rates, so I’ll skip that. You may substitute your own list of “obvious signs society is falling apart at the genes” if you so desire.

Nevertheless, the transition from 30% (or greater) infant mortality to almost 0% is amazing, both on a technical level and because it heralds an unprecedented era in human evolution. The selective pressures on today’s people are massively different from those our ancestors faced, simply because our ancestors’ biggest filter was infant mortality. Unless infant mortality acted completely at random–taking the genetically loaded and unloaded alike–or on factors completely irrelevant to load, the elimination of infant mortality must continuously increase the genetic load in the human population. Over time, if that load is not selected out–say, through more people being too unhealthy to reproduce–then we will end up with an increasing population of physically sick, maladjusted, mentally ill, and low-IQ people.

(Remember, all mental traits are heritable–so genetic load influences everything, not just controversial ones like IQ.)

If all of the above is correct, then I see only 4 ways out:

  1. Do nothing: Genetic load increases until the population is non-functional and collapses, resulting in a return of Malthusian conditions, invasion by stronger neighbors, or extinction.
  2. Sterilization or other weeding out of high-load people, coupled with higher fertility by low-load people
  3. Abortion of high load fetuses
  4. Genetic engineering

#1 sounds unpleasant, and #2 would result in masses of unhappy people. We don’t have the technology for #4, yet. I don’t think the technology is quite there for #3, either, but it’s much closer–we can certainly test for many of the deleterious mutations that we do know of.

Sweet Poison: Life with Hypoglycemia

Note: I am not a doctor nor any other kind of medical professional. This post is not intended to be medical advice, but a description of one person’s personal experience. Please consult with a real medical professional if you need advice about any medical problems. Thank you.

Hypoglycemia is a medical condition in which the sufferer has too little sugar (glucose) in their bloodstream, like an inverse diabetes. Diabetics suffer an inability to produce/absorb insulin, without which their cells cannot properly absorb glucose from the blood. Hypoglycemics over-produce/absorb insulin, driving too much sugar into the cells and leaving too little in the blood.

There are actually two kinds of hypoglycemia–general low blood sugar, which can be caused by not having eaten recently, and reactive hypoglycemia, caused by the body producing too much insulin in response to a sugar spike.

What does hypoglycemia feel like?

It’s difficult to describe, and I make no claim that this is how other hypoglycemics feel, but for me it’s a combination of feeling like my heart is beating too hard and weakness in my limbs. I start feeling light-headed, shaky, and in extreme cases, can collapse and pass out.

It’s not fun.

So how do I know it’s not just psychosomatic?

The simple answer is that sometimes I start feeling nasty after eating something I was told “has no sugar,” check the label, and sure enough, there’s sugar.

By the way, “evaporated cane juice” IS SUGAR.

It took several years to piece all of the symptoms together and figure out that my light-headed fainting spells were a result of eating specific foods, and that I could effectively prevent them by changing my diet and making sure I eat regularly.

I don’t fancy doing experiments on myself by purposefully trying to trigger hypoglycemia, so my list of foods to avoid can’t be exact; it’s extrapolated from what I’ve experienced:

More than a couple bites of any high-sugar item like ice cream, candy, cookies, chocolate, or flavored yogurt.

Yes, yogurt. Lots of people like to tout flavored yogurts as “health food.” Bollocks. They strip out the good, tasty fats and then try to make it palatable again by loading it up with sugar, creating an abomination that makes me feel as nasty as a bowl of ice cream. “Health food” my butt.

I also avoid all sugary drinks, like soda and fruit juice.

Yes, fruit juice. Fruit juice is mostly fructose, a kind of sugar, and your body processes it into glucose just like other sugars. A cup or two of juice and I start feeling the effects, just like any other sugary thing.

(Note: the exact mechanism of sugar metabolism varies according to the chemical structure of the individual sugar, but all sugars get broken down into glucose. Fruit sugar is fructose, the same stuff that’s in High Fructose Corn Syrup.)

I generally don’t have a problem eating fruit.

I don’t eat/drink products with fake sugars, like Diet Soda or sugar-free ice cream, on the grounds that I don’t really know how the body will ultimately react to these artificial chemicals and because I don’t want to develop a taste for sweet things. There’s a lot of habit involved in eating, and if I start craving sweets that I can’t have, I’m going to be a lot more miserable than if I drink a glass of water now and forgo a Diet Soda.

A quick digression about artificial foods: once upon a time, people were very concerned about saturated fats in their diets, so they started eating foods with laboratory-produced “trans fats” instead. The differences between regular fats and trans fats are chemical; the regular fat a trans fat is based on is typically a liquid (that is, an oil), and essentially flipping part of the molecule from one side of a double bond to the other creates a room-temperature solid. The great thing about trans fats is that they’re shelf-stable–that is, they won’t go rancid quickly at room temperature–and can be made from plant oils instead of animal fats. (Plants are much cheaper to grow than animals.) The downside to trans fats is that our bodies aren’t quite sure how to digest them and incorporate them into cell membranes, and they appear to ultimately give you cancer.

So… You were probably better off just frying things in lard like the Amish do than switching to margarine.

The moral of the story is that I am skeptical of lab-derived foods. They might be just fine, but I have plenty to eat and drink already so I don’t see any reason to take a chance.

Finally, I eat bananas, pasta, and cereals in moderation, and certainly not in the morning. These are all items high in starches and complex carbohydrates, so they aren’t as bad as the pure sugar items, but I am cautious about them.

Yes, timing matters–your body absorbs sugars more quickly after fasting than when you’ve already eaten, which is why your mother always told you to eat your dinner first and dessert second. My hypoglycemia is therefore worst in the morning, when I haven’t eaten yet. Back in the day, I had about 20 to 30 minutes after waking up to get breakfast or else I would start getting shaky and weak and have to lie down and try to convince someone else to get me some breakfast. Likewise, if I ate the wrong things for breakfast, like sugary cereals or bananas, I also had to lie down afterwards.

I’ve since discovered that if I have a cup of coffee first thing in the morning, my blood sugar doesn’t crash and I have a much longer window in which to eat breakfast, so I have time to get the kids ready for school and then eat. I don’t know what exactly it is about the coffee that helps–is it just having a cup of liquid? Is it the milk I put in there? The coffee itself? All three together? I just know that it works.

As with all things food and diet related, it’s probably more useful to know what I can eat than what I don’t: Meat, milk, cheese, sandwiches, lasagna, nuts, peanut butter, potatoes with butter + cheese, beets, soup, soy, coconuts, pizza, most fruit, coffee, tea, etc.

It’s really not bad.

In the beginning, I was occasionally sad because I’d get dragged to the ice cream shop and watch everyone else eat ice cream while I couldn’t have any (technically I can have a couple of bites but they don’t sell it in that quantity.) But when eating something makes you feel really bad, you tend to stop wanting to eat it.

So long as I have my morning coffee, avoid sweets, and eat at regular intervals, I feel 100% fine. (And coffee excepted, this is what nutritionists say you’re supposed to do, anyway.) I don’t feel sick, I don’t feel weak or dizzy, I don’t shake, etc.

The only problem, such as it is, is that I live in a society that assumes I can eat sugar and assumes that I am concerned about diabetes and gaining weight. Every pregnancy, for example, doctors try to test me for gestational diabetes. The gestational diabetes test involves fasting, drinking a bottle of pure glucose, and then seeing what my insulin levels do. I have yet to talk to any ob-gyn (or midwife practice) with policies in place for handling hypoglycemic patients. Every single one has a blanket policy of making all of their patients drink bottles of glucose. No, I am not drinking your goddamn glucose.

Obviously I have to bring a sack lunch to group events where the “catered meal” turns out to be donuts and cookies. (“Oh but there is a tray with celery on it! You can eat that, right?” No. No I can’t. I can’t keep my blood sugar levels from dropping by eating celery.) And of course I look like a snob at parties (No, sorry, I don’t want the punch. No, no pie for me. No, no cookies. Ah, no, I don’t eat cake. Look, do you have any potatoes?) But these are minor inconveniences, easily dealt with. Certainly compared with Type I Diabetics, who must constantly monitor their blood sugar levels and inject insulin, I have nothing to complain about. To be honest, I don’t even think of myself as having a problem; I just think of society as weird.

F. daltoniana, Himalayan strawberry

Step back a moment and look at matters in historical perspective. For about 190,000 years, all humans ate hunter-gatherer diets. About 10,000 years ago, more or less, our ancestors started practicing agriculture and began eating lots of grain. (Hunter-gatherers also ate grain, but not in the same quantities.) Only in the past couple of centuries has refined sugar become widespread, and only in the past few decades have sugars like HFCS become routinely added to regular foods.

Consider fruit juice, which seems natural. It actually takes a fair amount of energy (often mechanized) to squeeze the juice out of an apple. Most of the juice our ancestors drank was fermented, ie, hard cider or wine, which was necessary to keep it from going bad in the days before pasteurization and modern bottling techniques. Fermentation, of course, whether in pickles, yogurt, wine, or bread, transforms natural sugars into acids, alcohols, or gases (the bubbles in bread.)

In other words, your ancestors probably didn’t drink too many glasses of fresh, unfermented juice. Even modern fruit is probably much sweeter than the fruits our ancestors ate–compare the sugar levels of modern hybrid corns developed in laboratories to their ancestors from the 1800s, for example. (Yes, I know corn is a “grain” and not a “fruit.” Also, a banana is technically a “berry” but a raspberry is not. It’s an “aggregate fruit.” These distinctions are irrelevant to the question of how much fructose is in the plant.)

Or as Anastapoulo writes on the history of apples:

The apple was first brought to the United States by European settlers seeking freedom in a new world. At first, however, these European cultivars failed to thrive in the American climate, having adapted to environmental conditions an ocean away. They did, however, release seeds, leading to the fertilization and eventual germination of countless new apple breeds. Suddenly, the number of domesticated apples in North America skyrocketed, and the species displayed an amount of genetic diversity that far surpassed that of Europe or other areas of the world (Juniper).

…Traditionally, apple production had been a domestic affair, with most crops being grown on private properties and family orchards. However, a rise in commercial agriculture at the beginning of the twentieth century, the institution of industrial farming practices, and the introduction of electric refrigeration in transportation all impacted the process of growing apples, and these innovations caused the industry to grow. This expansion of commercial apple growing eventually caused apple biodiversity to decline because growers decided to narrow apple production to only a handful of select cultivars based primarily on two key selling factors: sweetness and appearance. In so doing, the thousands of other existing apple varieties, each specialized for a different use, started to become obsolete in the face of more universally accepted varieties, including the infamous Red Delicious, a sugary sweet and visually appealing apple that has become the poster child of the industry (Pollan). …

Rather than rely on natural crossbreeding and pure chance to hopefully create a successful apple variety, growers instead turned to science, and they began implementing breeding practices to develop superior apples that embodied their desired characteristics. … As a result, heirloom and other traditional varieties became all but irrelevant; banished from commercial orchards, they were left to grow in front yards, small local orchards, or in the wild. … Indeed, according to one study, of the 15,000 varieties of apples that were once grown in North America, about eighty percent have vanished (O’Driscoll). It should be noted that a number of these faded because they were originally grown for hard cider, a beverage that fell out of popularity during Prohibition. … Such practices now mean that forty percent of apples sold in grocery markets are a single variety: the Red Delicious (O’Driscoll).

There’s certainly nothing evolutionarily normal about eating ice cream for dinner–your ancestors didn’t even have refrigerators.

So to me, the odd thing isn’t that I can’t eat these strange new foods in large quantities, but that so many other people go ahead and eat them.

Yes, I know they taste good. But like most people, I have normative biases that make me assume that everyone else thinks the same way I do, so I find it weird that “food that makes people feel bad” is so common.

And you might say, “Well, it doesn’t actually make other people feel bad; everyone else can eat these things without trouble,” but last time I checked, society was “suffering an obesity epidemic,” the majority of people were overweight, “metabolic syndrome,” pre-diabetes and Type II Diabetes were rampant, etc., so I really don’t think everyone else can eat these things without trouble. Maybe it’s a different, less immediately noticeable kind of trouble, but it’s trouble nonetheless.

Ultimately, maybe hypoglycemia is a blessing in disguise.

Anthropology Friday: Gypsies

Vincent Van Gogh, The Caravans

It is easy to romanticize the Gypsies–quaint caravans, jaunty music, and the nomadic lifestyle of the open road all lend themselves to pleasant fantasies. The reality of Gypsy life is much sadder. They are plagued by poverty, illiteracy, violence, the diseases of high consanguinity, and the meddling of outsiders, some better intentioned than others.

I’m going to start off with something which, if true, is rather poetic.

The Gypsies refer to themselves as Rom (or Romani.) I prefer “Gypsy” because I am an American who speaks English and “Gypsy” is the most accepted, well-known ethnonym in American English, but I am also familiar with Rom.

Anyway, there are a couple of other nomadic groups which appear to be related to the Rom, called the Lom and Dom (their languages, respectively, are Romani, Lomavren, and Domari.) Genetically, these three groups may be the result of different waves of migration from India, where they may have originated from the Domba (or Dom) people.

All four groups speak Indo-European languages. According to Wikipedia:

Its presumed root, ḍom, which is connected with drumming, is linked to damara and damaru, Sanskrit terms for “drum” and the Sanskrit verbal root डम् ḍam- ‘to sound (as a drum)’, perhaps a loan from Dravidian, e.g. Kannada ḍamāra ‘a pair of kettle-drums’, and Telugu ṭamaṭama ‘a drum, tomtom’.[2]

The Gypsy flag features, appropriately, a wheel

Given the Gypsies’ reputation for musical ability, there is something lovely and poetic about having a name that literally means “Drum.”

Unfortunately, the rest of the picture is not so cheerful.

Isabel Fonseca recounts in Bury Me Standing: The Gypsies and Their Journey:

The new socialist government in postwar Poland aspired to build a nationally and ethnically homogenous state. Although the Gypsies accounted for about .005 percent of the population, “the Gypsy problem” was labeled an “important state task,” and an Office of Gypsy Affairs was established under the jurisdiction of the Ministry of Internal Affairs–that is, the police. It was in operation until 1989.

In 1952 a broad program to enforce the settlement of Gypsies also came into effect: it was known as the Great Halt … The plan belonged to the feverish fashion for “productivization” which, with its well-intentioned welfare provisions, in fact imposed a new culture of dependency on the Gypsies, who had always opposed it. Similar legislation would be adopted in Czechoslovakia (1958), in Bulgaria (1958), and in Romania (1962), as the vogue for forced assimilation gathered momentum. … by the late 1960s settlement was the goal everywhere. In England and Wales … the 1968 Caravan Sites Act aimed to settle Gypsies (partially by a technique of population control known as “designation” in which whole large areas of the country were declared off-limits to Travelers). …

But no one has ever thought to ask the Gypsies themselves. And accordingly all attempts at assimilation have failed. …

In a revised edition of his great book The Gypsies in Poland, published in 1984, Ficowski reviews the results of the Big Halt campaign. “Gypsies no longer lead a nomadic life, and the number of illiterates has considerably fallen.” But even these gains were limited because Gypsy girls marry at the age of twelve or thirteen, and because “in the very few cases where individuals are properly educated, they usually tend to leave the Gypsy community.” The results were disastrous: “Opposition to the traveling of the Gypsy craftsmen, who had taken their tinsmithing or blacksmithing into the uttermost corner of the country, began gradually to bring about the disappearance of … most of the traditional Gypsy skills.” And finally, “after the loss of opportunities to practice traditional professions, [for many Gypsies] the main source of livelihood became preying on the rest of society.” Now there really was something to be nostalgic about. Wisdom comes too late. The owl of Minerva flies at dusk.

That a crude demographic experiment ended in rootlessness and squalor is neither surprising nor disputed…

Cabrini-Green, circa 1960

Of course, Gypsy life was not so great before settlement, either. Concentrations of poverty in the middle of cities may be much easier to measure and deplore than half-invisible migrant people on the margins of society, but no one appreciates being rounded up and forced into ghettos.

I am reminded here of all of the similar American attempts, from Pruitt-Igoe to Cabrini-Green. Perhaps people had good intentions upon building these places. New, clean, cheap housing. A community of people like oneself, in the heart of a thriving city.

And yet they’ve all failed pretty miserably.

On the other hand, the Guardian reports on violence (particularly domestic) in Gypsy communities in Britain:

…a study in Wrexham, cited in a paper by the Equality and Human Rights Commission, 2007, found that 61% of married English Gypsy women and 81% of Irish Travellers had experienced domestic abuse.

The Irish Travellers are ethnically Irish, not Gypsy, but lead similarly nomadic lives.

“I left him and went back to my mammy but he kept finding me, taking me home and getting me pregnant,” Kathleen says. She now feels safe because she has male family members living on the same site. “With my brother close by, he wouldn’t dare come here.” …

But domestic violence is just one of the issues tackled by O’Roarke during her visits. The welfare needs, particularly those of the women and girls, of this community are vast. The women are three times more likely to miscarry or have a still-born child compared to the rest of the population, mainly, it is thought, as a result of reluctance to undergo routine gynaecological care, and infections linked to poor sanitation and lack of clean water. The rate of suicides among Traveller women is significantly higher than in the general population, and life expectancy is low for women and men, with one third of Travellers dying before the age of 59. And as many Traveller girls are taken out of education prior to secondary school to prevent them mixing with boys from other cultures, illiteracy rates are high. …

Things seem set to get worse for Traveller women. Only 19 days after the general election last year, £50m that had been allocated to building new sites across London was scrapped from the budget. O’Roarke is expecting to be the only Traveller liaison worker in the capital before long – her funding comes from the Irish government.

“Most of the women can’t read or write. Who is supposed to help them if they get rid of the bit of support they have now?” asks O’Roarke. “We will be seeing Traveller women and their children on the streets because of these cuts. If they get a letter saying they are in danger of eviction but they can’t read it, what are they supposed to do?”

August Von Pettenkofen, Gipsy Children

Welfare state logic is painful. Obviously Britain is a modern, first-world country with a free education system in which any child, male or female, can learn to read (unless they are severely low-IQ.) If Gypsies and Travellers want to preserve their cultures with some modicum of dignity, then they must read, because literacy is necessary in the modern economy. Forced assimilation or not, no one really needs traditional peripatetic tinsmiths and blacksmiths anymore. Industrialization has eliminated such jobs.

Kathleen, after spending time in a refuge after finally managing to escape her husband, was initially allocated a house, as opposed to a plot on a [trailer] site. Almost immediately her children became depressed. “It’s like putting a horse in a box. He would buck to get out,” says Kathleen. “We can’t live in houses; we need freedom and fresh air. I was on anti-depressives. The children couldn’t go out because the neighbours would complain about the noise.”

Now this I am more sympathetic to. While I dislike traveling, largely because my kids always get carsick, I understand that plenty of people actually like being nomadic. Indeed, I wouldn’t be surprised if some people were genetically inclined to be outside, to move, to be constantly on the road, while others were genetically inclined to settle down in one place. To try to force either person into a lifestyle contrary to their own nature would be cruel.

Disease, lifestyle, and consanguinity in 58 American Gypsies:

Medical data on 58 Gypsies in the area of Boston, Massachusetts, were analysed together with a pedigree linking 39 of them in a large extended kindred. Hypertension was found in 73%, diabetes in 46%, hypertriglyceridaemia in 80%, hypercholesterolaemia in 67%, occlusive vascular disease in 39%, and chronic renal insufficiency in 20%. 86% smoked cigarettes and 84% were obese. Thirteen of twenty-one marriages were consanguineous, yielding an inbreeding coefficient of 0.017. The analysis suggests that both heredity and environment influence the striking pattern of vascular disease in American Gypsies.
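
For context on that inbreeding coefficient, a back-of-the-envelope comparison (mine, not the study’s): by the standard path-counting formula, the expected inbreeding coefficient for the offspring of first cousins is 1/16, and for second cousins 1/64, so a sample-wide mean of 0.017 means these families were, on average, about as inbred as the children of second cousins:

```latex
F_{\text{first-cousin offspring}} = \frac{1}{16} \approx 0.0625, \qquad
F_{\text{second-cousin offspring}} = \frac{1}{64} \approx 0.0156
```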

Genetic studies of the Roma (Gypsies): A review:

Although far from systematic, the published information indicates that medical genetics has an important role to play in improving the health of this underprivileged and forgotten people of Europe. Reported carrier rates for some Mendelian disorders are in the range of 5–15%, sufficient to justify newborn screening and early treatment, or community-based education and carrier testing programs for disorders where no therapy is currently available. …

Reported gene frequencies are high for both private and “imported” mutations, and often exceed by an order of magnitude those for global populations. For example, galactokinase deficiency whose worldwide frequency is 1:150,000 to 1:1,000,000 [56,57] affects 1 in 5,000 Romani children [44]; autosomal dominant polycystic kidney disease (ADPKD) has a global prevalence of 1:1000 individuals worldwide [58] and 1:40 among the Roma in some parts of Hungary [17]; primary congenital glaucoma ranges between 1:5,000 and 1:22,000 worldwide [59,60] and about 1:400 among the Roma in Central Slovakia [61,62].

Carrier rates for a number of disorders have been estimated to be in the 5 to 20% range (Table 3). …

Historical demographic data are limited, however tax registries and census data give an approximate idea of population size and rate of demographic growth through the centuries (Table 4). A small size of the original population is suggested by the fact that although most of the migrants arriving in Europe in the 11th-12th century remained within the limits of the Ottoman Empire [1,75], the overall number of Roma in its Balkan provinces in the 15th century was estimated at only 17,000. …

During its subsequent history in Europe, this founder population split into numerous socially divided and geographically dispersed endogamous groups, with historical records from different parts of the continent consistently describing the travelling Gypsies as “a group of 30 to 100 people led by an elder” [1,2]. These splits, a possible compound product of the ancestral tradition of the jatis of India, and the new social pressures in Europe (e.g. Gypsy slavery in Romania [76] and repressive legislation banning Gypsies from most western European countries [1,2]), can be regarded as secondary bottlenecks, reducing further the number of unrelated founders in each group. The historical formation of the present-day 8 million Romani population of Europe is therefore the product of the complex initial migrations of numerous small groups, superimposed on which are two large waves of recent migrations from the Balkans into Western Europe, in the 19th – early 20th century, after the abolition of slavery in Rumania [1,2,76] and over the last decade, after the political changes in Eastern Europe [7,8]. …

Individual groups can be classified into major metagroups [1,2,75]: the Roma of East European extraction; the Sinti in Germany and Manouches in France and Catalonia; the Kaló in Spain, Ciganos in Portugal and Gitans of southern France; and the Romanichals of Britain [1]. The greatest diversity is found in the Balkans, where numerous groups with well defined social boundaries exist. The 700-800,000 Roma in Bulgaria belong to three metagroups, comprising a large number of smaller groups [75].

Current Developments in Anthropological Genetics reports:

[images omitted]

Of course, if you want the full details on consanguinity in Gypsies, you have to read HBDChick:

the actual cousin marriage rates vary though from (as you’ll see below) ca. 10-30% first cousin only marriages amongst gypsies in slovakia to 29% first+second cousin marriages amongst gypsies in spain [pdf] to 36% first+second cousin marriages amongst gypsies in wales [pdf]. these rates are comparable to those found in places like turkey (esp. eastern turkey) or north africa…or southern india.

I’m not quoting the whole thing for you; you’ll just have to go read it yourself.

Health Status of Gypsies and Travellers in England:

The 1987 national study of Travellers’ health status in Ireland [11] reported a high death rate for all causes and lower life expectancy for Irish Travellers: women 11.9 years and men 9.9 years lower than the non‐Traveller population. Our pilot study of 87 Gypsies and Travellers matched for age and sex with indigenous working class residents in a socially deprived area of Sheffield [12] reported statistically and clinically significant differences between Gypsies and Travellers and their non‐Gypsy comparators in some aspects of health status, and significant associations with smoking and with frequency of travelling. The report of the Confidential Enquiries into Maternal Deaths in the UK, 1997–1999, found that Gypsies and Travellers have “possibly the highest maternal death rate among all ethnic groups”. [13]

And as Dr. James Thompson notes, Gypsies do not do well on IQ tests, with many groups scoring in the 60–85 range. (White Americans average 100.)

This is all kind of depressing, but I have a thought: if Gypsies want to preserve their culture and improve their lives, perhaps the disease burden could be lessened and IQs raised by encouraging young Gypsy men and women to find partners in other Gypsy groups from other countries instead of within their own kin groups.

According to Wikipedia:

Further evidence for the South Asian origin of the Romanies came in the late 1990s. Researchers doing DNA analysis discovered that Romani populations carried large frequencies of particular Y chromosomes (inherited paternally) and mitochondrial DNA (inherited maternally) that otherwise exist only in populations from South Asia.

47.3% of Romani men carry Y chromosomes of haplogroup H-M82 which is rare outside South Asia.[18] Mitochondrial haplogroup M, most common in Indian subjects and rare outside Southern Asia, accounts for nearly 30% of Romani people.[18] A more detailed study of Polish Roma shows this to be of the M5 lineage, which is specific to India.[19] Moreover, a form of the inherited disorder congenital myasthenia is found in Romani subjects. This form of the disorder, caused by the 1267delG mutation, is otherwise known only in subjects of Indian ancestry. This is considered to be the best evidence of the Indian ancestry of the Romanis.[20]

Map of Gypsy migrations into Europe

I must stop here and note that I have painted a largely depressing picture. It is not the picture I want to paint. I would like to paint a picture of hope and triumph. Certainly there are many talented, hard-working, kind, decent, and wonderful Gypsies. I hope the best for them, and a brighter future for their children.

The evolution of fraud

Much of evolutionary literature focuses on the straightforward relationship between predator and prey, or on competition between members of the same species for limited resources, mates, etc.

But today we’re going to focus on fraud.

Milk Snake
Coral Snake

Red touch yellow, kill a fellow. Red touch black, friend to Jack.

The Coral snake is deadly poisonous. (Or venomous, as they say.) The Milk snake is harmless, but by mimicking the coral’s red, black, and yellow bands, it tricks potential predators into believing that it, too, will kill them.

The milk snake is a fraud, benefiting from the coral’s venom without producing any of its own.

Nature has many frauds, from the famously brood-parasitic Cuckoos to the nightmare-fuel snail eyestalk-infecting flatworms, to the fascinating mimic octopus, who can change the colors and patterns on its skin in the blink of an eye.

But just as predator and prey evolve in tandem–prey developing new strategies to outwit predators, and predators developing counter-strategies in turn–so it goes with fraud: animals that detect frauds out-compete those that are successfully deceived.

Complex human systems depend enormously on trust–and thus are prime breeding grounds for fraud.

Let’s take the job market. Employers want to hire the best employees possible (at the lowest possible prices, of course.) So employers do their best to (efficiently) screen potential candidates for work-related qualities like diligence, honesty, intelligence, and competency.

Employees want to eat. Diligence, honesty, years spent learning how to do a particular job, etc., are not valued because they help the company, but because they result in eating (and, if you’re lucky, reproduction.)

When there are far more employees competing against each other for jobs than there are openings, not only do employers have a chance to ratchet up the qualifications they demand in applicants, they pretty much have to. No employer trying to fill a single position has time to read 10,000 resumes, nor would it be in their interest to do so. So employers come up with requirements–often totally arbitrary–to automatically cut down the number of applications.

“Must have 3-5 years work experience” = people with 6 years of experience automatically rejected.

“Must be currently employed with no gaps in resume” = no one who took time off to have children. (This is one of the reasons birthrates are so low.)

“Must have X degree” = person with 15 years experience in the field but no degree automatically rejected.

The result, of course, is that prospective employees begin lying, cheating, or finding other deceptive ways to trick employers into reading their resumes. Workers with 6 years of experience put down 5. Workers with 2 record 3. People who can’t get into American medical schools attend Caribbean ones. “Brought donuts to the meeting” is inflated to “facilitated cross-discipline network conversation.” Whites who believe employers are practicing AA tickybox “black” on their applications. And as more and more jobs that formerly required nothing more than a high school diploma start requiring college degrees, more and more colleges start offering bullshit degrees so that everyone can get one.

The higher the competition and more arbitrary the rules, the higher the incentives for cheating.
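
To make the mechanics concrete, here is a minimal sketch of the kind of automated screen described above. All field names and cutoffs are hypothetical, invented purely for illustration:

```python
# A toy version of an automated resume screen. Everything here is
# hypothetical -- the point is only that rigid rules auto-reject
# reasonable candidates and reward "adjusted" numbers.

def passes_screen(applicant: dict) -> bool:
    """Apply rigid, arbitrary filters before any human reads the resume."""
    if not (3 <= applicant["years_experience"] <= 5):
        return False  # 6 years of experience: automatically rejected
    if applicant["employment_gap_months"] > 0:
        return False  # took time off to raise children: rejected
    if not applicant["has_degree"]:
        return False  # 15 years in the field, no degree: rejected
    return True

honest = {"years_experience": 6, "employment_gap_months": 0, "has_degree": True}
fudged = dict(honest, years_experience=5)  # same person, one year shaved off

print(passes_screen(honest))  # False
print(passes_screen(fudged))  # True
```

The incentive to shave that year off is obvious.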

The problem is particularly bad (or at least blatant) in many developing countries, eg, The Mystery of India’s Deadly Exam Scam:

It began with a test-fixing scandal so massive that it led to 2,000 arrests, including top politicians, academics and doctors. Then suspects started turning up dead. What is the truth behind the Vyapam scam that has gripped India? …

For at least five years, thousands of young men and women had paid bribes worth millions of pounds in total to a network of fixers and political operatives to rig the official examinations run by the Madhya Pradesh Vyavsayik Pariksha Mandal – known as Vyapam – a state body that conducted standardised tests for thousands of highly coveted government jobs and admissions to state-run medical colleges. When the scandal first came to light in 2013, it threatened to paralyse the entire machinery of the state administration: thousands of jobs appeared to have been obtained by fraudulent means, medical schools were tainted by the spectre of corrupt admissions, and dozens of officials were implicated in helping friends and relatives to cheat the exams. …

The list of top state officials placed under arrest reads like the telephone directory of the Madhya Pradesh secretariat. The most senior minister in the state government, Laxmikant Sharma – who had held the health, education and mining portfolios – was jailed, and remains in custody, along with his former aide, Sudhir Sharma, a former schoolteacher who parlayed his political connections into a vast mining fortune.

One of the things I find amusing (and, occasionally, frustrating) about Americans is that many of us are still so trusting. What we call “corruption”–what we imagine as an infection in an otherwise healthy entity–is the completely normal way of doing business throughout most of the world. (I still run into people who are surprised to discover that there are a lot of scams being run out of Nigeria. Nigerian scammers? Really? You don’t say.)

It’s good to get out of your bubble once in a while. Go hang out on international forums with people from the third world, and listen in on some of the conversations between Indians and Pakistanis or Indians and Chinese. Chinese and Indians constantly accuse each other’s countries of engaging in massive educational cheating.

Maybe they know something we don’t.

People want jobs because jobs mean eating; a good job means good eating, ergo every family worth its salt wants their children to get good jobs. But in a nation with 1.2 billion people and only a few good jobs, competition is ferocious:

In 2013, the year the scam was first revealed, two million young people in Madhya Pradesh – a state the size of Poland, with a population greater than the UK – sat for 27 different examinations conducted by Vyapam. Many of these exams are intensely competitive. In 2013, the prestigious Pre-Medical Test (PMT), which determines admission to medical school, had 40,086 applicants competing for just 1,659 seats; the unfortunately named Drug Inspector Recruitment Test (DIRT), had 9,982 candidates striving for 16 vacancies in the state department of public health.

For most applicants, the likelihood of attaining even low-ranking government jobs, with their promise of long-term employment and state pensions, is incredibly remote. In 2013, almost 450,000 young men and women took the exam to become one of the 7,276 police constables recruited that year – a post with a starting salary of 9,300 rupees (£91) per month. Another 270,000 appeared for the recruitment examination to fill slightly more than 2,000 positions at the lowest rank in the state forest service.
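
A quick back-of-the-envelope calculation (mine, using the rounded figures quoted above) puts those odds in perspective:

```python
# Acceptance rates implied by the figures in the quoted passage.
exams = {
    "Pre-Medical Test (PMT)": (1_659, 40_086),
    "Drug Inspector Recruitment Test (DIRT)": (16, 9_982),
    "Police constable recruitment": (7_276, 450_000),
    "State forest service, lowest rank": (2_000, 270_000),
}

for name, (seats, applicants) in exams.items():
    print(f"{name}: {seats / applicants:.2%}")
# PMT: 4.14%, DIRT: 0.16%, constables: 1.62%, forest service: 0.74%
```

Elite-university admission rates, for constable and forest-guard posts.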

Since no one wants to spend their life picking up trash or doing back-breaking manual labor in the hot sun, the obvious solution is to cheat:

The impersonators led the police to Jagdish Sagar, a crooked Indore doctor who had set up a lucrative business that charged up to 200,000 rupees (£2,000) to arrange for intelligent but financially needy medical students to sit examinations on behalf of applicants who could afford to pay.

The families of dumb kids pay for smart kids to take tests for them.

In 2009, police claim, Sagar and Mohindra [Vyapam’s systems analyst/data entry guy] had a meeting in Sagar’s car in Bhopal’s New Market bazaar, where the doctor made an unusual proposition: he would give Mohindra the application forms of groups of test-takers, and Mohindra would alter their “roll numbers” to ensure they were seated together so they could cheat from each other. According to Mohindra’s statement to the police, Sagar “offered to pay me 25,000 rupees (£250) for each roll number I changed.”

This came to be known as the “engine-bogie” system. The “engine” would be one of Sagar’s impostors – a bright student from a medical college, taking the exam on behalf of a paying customer – who would also pull along the lower-paying clients sitting next to him by supplying them with answers. … From 2009 to 2013, the police claim, Mohindra tampered with seating assignment for at least 737 of Sagar’s clients taking the state medical exam. …

Mohindra also began just straight-up filling in the bubbles and altering exam scores in the computer for rich kids whose parents had paid him off.

Over the course of only two years, police allege, Mohindra and Trivedi conspired to fix the results of 13 different examinations – for doctors, food inspectors, transport constables, police constables and police sub-inspectors, two different kinds of school teachers, dairy supply officers and forest guards – which had been taken by a total of 3.2 million students.

Remember this if you ever travel to India.

But merely uncovering the scam does not make it go away; witnesses begin dying:

In July 2014, the dean of a medical college in Jabalpur, Madhya Pradesh, Dr SK Sakalle – who was not implicated in the scandal, but had reportedly investigated fraudulent medical admissions and expelled students accused of obtaining their seats by cheating – was found burned to death on the front lawn of his own home. …

In an interview with the Hindustan Times earlier this year, a policeman, whose own son was accused in the scam and died in a road accident, advanced an unlikely yet tantalising theory. He argued that the Vyapam taskforce – under pressure to conduct a credible probe that nevertheless absolved top government officials – had falsely named suspects who were already deceased in order to shield the real culprits.

A competing theory, voiced by journalists covering the scandal in Bhopal, proposes that it will be all but impossible to determine whether the deaths are connected to Vyapam, because the families of many of the dead refuse to admit that their children paid money to cheat on their exams – for fear that the police might arrest the bereaved parents as well.

For India’s poor (and middle class,) scamming is a damned if you do, damned if you don’t affair:

“My brother was arrested four months ago for paying someone to ensure he cleared the police constable exam in 2012,” the man told me. “Some people in our village said, ‘This is Madhya Pradesh, nothing happens without money.’ My brother sold his land and paid them 600,000 rupees.”

In August that year, he was one of 403,253 people who appeared for the recruitment test to become a police constable. … Four months after his marriage, his name popped up in the scam, he lost his job and he was hauled off to prison.

“So now my brother has a wife and his first child, but no job, no land, no money, no prospects and a court case to fight,” the man said. “You can write your story, but write that this is a state of 75 million corrupt people, where there is nothing in the villages and if a man comes to the city in search of an honest day’s work, the politicians and their touts demand money and then throw him into jail for paying.”

“Pay them their wages each day before sunset, because they are poor and are counting on it. Otherwise they may cry to the Lord against you, and you will be guilty of sin.” — Deuteronomy 24:15

India is not the only place with such scandals:

China Catches 2,440 Students Cheating in High-Tech Scam

Behind Fake Degrees From Pakistan, a Maze of Deceit

2012 Harvard Cheating Scandal

Or, you know, pretty much the entire US economy, especially finance, insurance, and real estate.

I would like to note that in many of these cases, the little guys in the scam, while arguably acting dishonestly and cheating against their neighbors, are basically well-intentioned people who don’t see any other options besides bribing their way into jobs. In the end, these guys often get screwed (or end up dead.)

It’s the people who are taking the bribes and fixing the tests and creating bullshit degrees and profiting off people’s houses burning down who are getting rich off everyone else and ensuring that cheating is the only way to get ahead.

These people are parasites.

Parasitism increases complexity in the host organism, which increases complexity in the parasite in turn:

With selection, evolution can also produce more complex organisms. Complexity often arises in the co-evolution of hosts and pathogens,[7] with each side developing ever more sophisticated adaptations, such as the immune system and the many techniques pathogens have developed to evade it. For example, the parasite Trypanosoma brucei, which causes sleeping sickness, has evolved so many copies of its major surface antigen that about 10% of its genome is devoted to different versions of this one gene. This tremendous complexity allows the parasite to constantly change its surface and thus evade the immune system through antigenic variation.[8]

Animals detect and expel parasites; parasites adapt to avoid detection.

So, too, with human scams.

We tend to increase complexity by adding paperwork.

A few people cheat on their taxes, so the IRS begins randomly auditing people to make sure everyone is complying. A few people refuse to hire African Americans, so companies must keep records on the ethnic/racial identities of everyone they interview for a job. An apartment complex fears it could get sued if a car hits a bicyclist in the parking lot, so it forbids all of the children there from riding their bikes. A college gets sued after a mentally ill student commits suicide on campus, so the college starts expelling all mentally ill students.

Now, while I appreciate certain kinds of complexity (like the sort that results in me having a computer to write on and an internet to post on,) the variety that arises due to a constant war between parasites and prey doesn’t seem to have much in the way of useful side effects. Perhaps I am missing something, but it does not seem like increasing layers of oversight and bureaucracy in an attempt to cut down cheating makes the world any better–rather the opposite, in fact.

Interestingly, fevers are not diseases, nor even directly caused by disease; they are your own immune system’s response to disease. By increasing your internal temperature, your body aims to kill off the infection or at least make things too inhospitable for it to breed. Fevers (within a moderate range) are your friends.

They are still unpleasant and have a seriously negative effect on your ability to get anything else done.

An ill patient can do little more than lie in bed and hope for recovery; a sick society does nothing but paperwork.

Certainly the correct response to parasitism is to root it out–paperwork, fever, and all. But the long-term response should focus on restructuring institutions so they don’t become infected in the first place.

In human systems, interdependence in close-knit communities is probably the most reliable guard against fraud. You are unlikely to prosper by cheating your brother (genetically, after all, his success is also half your success,) and people who interact with you often will notice if you do not treat them fairly.

Tribal societies have plenty of problems, but at least you know everyone you’re dealing with.

Modern society, by contrast, forces people to interact with, and depend upon, thousands of people they don’t know–many they’ve met only once, and far more they’ve never met at all. When I sit down to dinner, I must simply trust that the food I bought at the grocery store is clean, healthy, and unadulterated; that no one has contaminated the milk, shoved downer cows into the chute, or failed to properly wash the tomatoes. When I drive I depend on other drivers to not be drunk or impaired, and upon the city to properly maintain the roads and direct traffic. When I apply for jobs I hope employers will actually read my resume and not just hire the boss’s nephew; when I go for a walk in the park, I hope that no one will mug me.

With so many anonymous or near-anonymous interactions, it is very easy for people to defraud others and then slip away, never to be seen again. A mugger melts into a crowd; the neighbor whose dog shat all over your yard moves and disappears. Twitter mobs strike out of the blue and then disperse.

So how do we get, successfully, from tight-knit tribes to million+ people societies with open markets?

How do modern countries exist at all?

I suspect that religion–Christianity in the West, probably others elsewhere–has played a major role in encouraging everyone to cooperate with their neighbors by threatening them with eternal damnation if they don’t.

To return to Deuteronomy 24:

Do not take a pair of millstones—not even the upper one—as security for a debt, because that would be taking a person’s livelihood as security.

If someone is caught kidnapping a fellow Israelite and treating or selling them as a slave, the kidnapper must die. You must purge the evil from among you. …

10 When you make a loan of any kind to your neighbor, do not go into their house to get what is offered to you as a pledge. 11 Stay outside and let the neighbor to whom you are making the loan bring the pledge out to you. 12 If the neighbor is poor, do not go to sleep with their pledge in your possession. 13 Return their cloak by sunset so that your neighbor may sleep in it. Then they will thank you, and it will be regarded as a righteous act in the sight of the Lord your God.

14 Do not take advantage of a hired worker who is poor and needy, whether that worker is a fellow Israelite or a foreigner residing in one of your towns. 15 Pay them their wages each day before sunset, because they are poor and are counting on it. Otherwise they may cry to the Lord against you, and you will be guilty of sin. …

17 Do not deprive the foreigner or the fatherless of justice, or take the cloak of the widow as a pledge. 18 Remember that you were slaves in Egypt and the Lord your God redeemed you from there. That is why I command you to do this.

To be fair, we have to credit Judaism for Deuteronomy.

Here we have organized religion attempting to bridge the gap between tribalism and universal morality. Enslaving one of your own is an offense punishable by death, but there is no command to rescue the enslaved of other nations. You must treat your own employees well, whether they come from your own tribe or other tribes.

In tribal societies, justice is run through the tribe. People with no families or clans–like orphans and foreigners–therefore cannot access the normal routes to justice.

As Peter Frost notes of the Germanic societies of the early Dark Ages:

The new barbarian rulers also disliked the death penalty, but for different reasons. There was a strong feeling that every adult male had a right to use violence and to kill, if need be. This right was of course reciprocal. If you killed a man, his death could be avenged by his brothers and other male kinsmen. The prospect of a vendetta thus created a ‘balance of terror’ that kept violence within limits. So, initially, the barbarians allowed capital punishment only for treason, desertion, and cowardice in combat (Carbasse, 2011, p. 35). [bold mine]

Frost quotes:

[The Salic Law] is a pact (pactus) “concluded between the Franks and their chiefs,” for the specific purpose of ensuring peace among the people by “cutting short the development of brawls.” This term evidently means private acts of vengeance, the traditional vendettas that went on from generation to generation. In place of the vengeance henceforth forbidden, the law obliged the guilty party to pay the victim (or, in the case of murder, his family) compensation. This was an indemnity whose amount was very precisely set by the law, which described with much detail all of the possible damages, this being to avoid any discussion between the parties and make [murder] settlements as rapid, easy, and peaceful as possible. […] This amount was called the wergild, the “price of a man.” The victim’s family could not refuse the wergild, and once it was paid, the family had to be satisfied. They no longer had the right to avenge themselves (Carbasse, 2011, pp. 33-34).

(Wikipedia notes that, “The same concept outside Germanic culture is known as blood money. Words include ericfine in Ireland, galanas in Wales, veriraha in Finnish, vira (“вира“) in Russia and główszczyzna in Poland. In the Arab world, the very similar institution of diyya is maintained into the present day.”)

Frost continues:

This situation began to change in the 12th century. One reason was that the State had become stronger. But there also had been an ideological change. The State no longer saw itself as an honest broker for violent disputes that did not challenge its existence. Jurists were now arguing that the king must punish the wicked to ensure that the good may live in peace.

In a tribal system, a victim with no family has no one to bring a suit on their behalf; if they are murdered, there is no one to pay weregild to. This leaves orphans and “foreigners” without any access to justice.

Thus Deuteronomy’s command not to mistreat them (or widows.) They aren’t protected under tribal law, but they are under Yahweh’s.

The threat of divine punishment (and promise of rewards for good behavior,) may have encouraged early Christians to cooperate with strangers. People who would cheat others now have both their own consciences and the moral standards of their Christian neighbors to answer to. The ability to do business with people outside of one’s own family or clan without constant fear of getting ripped off is a necessary prerequisite for the development of free markets, modern economies, and million+ nations. (In short, universalism.)

In the absence of universalist societies that effectively discourage cheating, groups that protect their own will out-compete groups that do not. The Amish, for example, have grown from 5,000 to 300,000 people over the past century (despite significant numbers of Amish children choosing to leave the society every generation.)
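
That growth figure is worth pausing on. A sixtyfold increase in a century implies, assuming steady growth (my arithmetic, not a sourced figure):

```latex
r = 60^{1/100} - 1 \approx 4.2\% \text{ per year}, \qquad
t_{\text{doubling}} = \frac{\ln 2}{\ln 1.042} \approx 17 \text{ years}
```

That is, the Amish population doubles roughly every seventeen years.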

(By contrast, my own family has largely failed to reproduce itself–my cousins are all childless, and I have no second cousins.)

The Amish avoid outsiders, keeping their wealth within their own communities. This probably also allows them to steer clear of cheaters and scammers (unlike everyone who lost money in the 2008 housing crash or the 2001 stock market crash.) As insular groups go, the Amish don’t seem too bad–I haven’t heard any reports of them stealing people’s chickens or scamming elderly widows out of their life’s savings.

Homeostasis, personality, and life (part 2)

Warning: This post may get a little fuzzy, due to discussion of things like personality, psychology, and philosophy.

Yesterday we discussed homeostatic systems for normal organism/organization maintenance and defense, as well as pathological malfunctions of over- or under-response from the homeostatic systems.

But humans are not mere action-reaction systems; they have qualia, an inner experience of being.

One of my themes here is the idea that various psychological traits, like anxiety, guilt, depression, or disgust, might not be just random things we feel, but exist for evolutionary reasons. Each of these emotions, when experienced moderately, may have beneficial effects. Guilt (and its cousin, shame,) helps us maintain our social relationships with other people, aiding in the maintenance of large societies. Disgust protects us from disease and helps direct sexual interest at one’s spouse, rather than random people. Anxiety helps people pay attention to crucial, important details, and mild depression may help people concentrate, stay out of trouble, or–very speculatively–have helped our ancestors hibernate during the winter.

In excess, each of these traits is damaging, but a shortage of each trait may also be harmful.

I have commented before on the remarkable statistic that 25% of women are on anti-depressants, and if we exclude women over 60 (and below 20,) the number of women with an “anxiety disorder” jumps to over 30%.

The idea that a full quarter of us are actually mentally ill is simply staggering. I see three potential causes for the statistic:

  1. Doctors prescribe anti-depressants willy-nilly to everyone who asks, whether they’re actually depressed or not;
  2. Something about modern life is making people especially depressed and anxious;
  3. Mental illnesses are side effects of common, beneficial conditions (similar to how sickle cell anemia is a side effect of protection from malaria.)

As you probably already know, sickle cell anemia is a genetic mutation that protects carriers from malaria. Imagine a population where 100% of people are sickle cell carriers–that is, they have one mutated gene, and one regular gene. The next generation in this population will be roughly 25% people who have two regular genes (and so die of malaria,) 50% of people who have one sickle cell and one regular gene (and so are protected,) and 25% of people will have two sickle cell genes and so die of sickle cell anemia. (I’m sure this is a very simplified scenario.)
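
If you want to check that arithmetic, here is a quick sketch enumerating the carrier-by-carrier cross (using the conventional labels A for the normal allele and S for the sickle allele):

```python
from itertools import product

# Both parents are carriers ("AS"). Each child inherits one allele
# from each parent, so there are four equally likely combinations.
mother, father = "AS", "AS"

counts = {}
for alleles in product(mother, father):      # ("A","A"), ("A","S"), ...
    genotype = "".join(sorted(alleles))      # normalize "SA" -> "AS"
    counts[genotype] = counts.get(genotype, 0) + 1

for genotype, n in sorted(counts.items()):
    print(genotype, n / 4)
# AA 0.25 -- two regular genes, no malaria protection
# AS 0.5  -- carriers, protected
# SS 0.25 -- sickle cell anemia
```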

So I consider it technically possible for 25% of people to suffer a pathological genetic condition, but unlikely–malaria is a particularly ruthless killer compared to being too cheerful.

Skipping to the point, I think there’s a little of all three going on. Each of us probably has some kind of personality “set point” that is basically determined by some combination of genetics, environmental assaults, and childhood experiences. People deviate from their set points due to random stuff that happens in their lives, (job promotions, visits from friends, car accidents, etc.,) but the way they respond to adversity and the mood they tend to return to afterwards is largely determined by their “set point.” This is all a fancy way of saying that people have personalities.
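
One minimal way to formalize a “set point”–my gloss, not a model drawn from the literature–is as mean reversion: mood m drifts with random shocks ε but is pulled back toward a personal baseline s at some rate k:

```latex
m_{t+1} = m_t + k\,(s - m_t) + \varepsilon_t, \qquad 0 < k \le 1
```

On this toy picture, people differ both in their baseline s and in how quickly they snap back after a shock (k)–which is just a formal restatement of “people have personalities.”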

The influence of random chance on these genetic/environmental factors suggests that there should be variation in people’s emotional set points–we should see that some people are more prone to anxiety, some less prone, and some of average anxiousness.

Please note that this is a statistical should, in the same sense that, “If people are exposed to asbestos, some of them should get cancer,” not a moral should, as in, “If someone gives you a gift, you should send a thank-you note.”

Natural variation in a trait does not automatically imply pathology, but being more anxious or depressive or guilt-ridden than others can be highly unpleasant. I see nothing wrong, a priori, with people doing things that make their lives more pleasant and manageable (and don’t hurt others); this is, after all, why I enjoy a cup of coffee every morning. If you are a better, happier, more productive person with medication (or without it,) then carry on; this post is not intended as a critique of anyone’s personal mental health management, nor a suggestion for how to take care of your mental health.

Our medical/psychological health system, however, operates on the assumption that medications are for pathologies only. There is no form to fill out that says, “Patient would like anti-anxiety drugs in order to live a fuller, more productive life.”

That said, all of these emotions are obviously responses to actual stuff that happens in real life, and if 25% of women are coming down with depression or anxiety disorders, I think we should critically examine whether anxiety and depression are really the disease we need to be treating, or the body’s responses to some external threat.

I am reminded here of Peter Frost’s On the Adaptive Value of “Aw Shucks”:

In a mixed group, women become quieter, less assertive, and more compliant. This deference is shown only to men and not to other women in the group. A related phenomenon is the sex gap in self-esteem: women tend to feel less self-esteem in all social settings. The gap begins at puberty and is greatest in the 15-18 age range (Hopcroft, 2009).

If more women enter the workforce–either because they think they ought to or because circumstances force them to–and the workforce triggers depression, then as the percent of women formally employed goes up, we should see a parallel rise in mental illness rates among women. Just as Adderall and Ritalin help little boys conform to the requirements of modern classrooms, Prozac and lithium help women cope with the stress of employment.

As we discussed yesterday, fever is not a disease, but part of your body’s system for re-asserting homeostasis by killing disease microbes and making it more difficult for them to reproduce. Extreme fevers are an over-reaction and can kill you, but a normal fever below 104 degrees or so is merely unpleasant and should be allowed to do its work of making you better. Treating a normal fever (trying to lower it) interferes with the body’s ability to fight the disease and results in longer sicknesses.

Likewise, these sorts of emotions, while definitely unpleasant, may serve some real purpose.

We humans are social beings (and political animals.) We do not exist on our own; historically, loneliness was not merely unpleasant, but a death sentence. Humans everywhere live in communities and depend on each other for survival. Without refrigeration or modern storage methods, saving food was difficult. (Unless you were an Eskimo.) If you managed to kill a deer while on your own, chances are you couldn’t eat it all before it began to rot, and then your chances of killing another deer before you started getting seriously hungry were low. But if you share your deer with your tribesmates, none of the deer goes to waste, and if they share their deer with yours, you are far less likely to go hungry.

If you end up alienated from the rest of your tribe, there’s a good chance you’ll die. It doesn’t matter if they were wrong and you were right; it doesn’t matter if they were jerks and you were the nicest person ever. If you can’t depend on them for food (and mates!) you’re dead. This is when your emotions kick in.

People complain a lot that emotions are irrational. Yes, they are. They’re probably supposed to be. There is nothing “logical” or “rational” about feeling bad because someone is mad at you over something they did wrong! And yet it happens. Not because it is logical, but because being part of the tribe is more important than who did what to whom. Your emotions exist to keep you alive, not to prove rightness or wrongness.

This is, of course, an oversimplification. Men and women have been subject to different evolutionary pressures, for example. But this is close enough for the purposes of the current conversation.

If modern people are coming down with mental illnesses at astonishing rates, then maybe there is something about modern life that is making people ill. If so, treating the symptoms may make life more bearable for people while they are subject to the disease, but still does not fundamentally address whatever it is that is making them sick in the first place.

It is my own opinion that modern life is pathological, not (in most cases,) people’s reactions to it. Modern life is pathological because it is new and therefore you aren’t adapted to it. Your ancestors have probably only lived in cities of millions of people for a few generations at most (chances are good that at least one of your great-grandparents was a farmer, if not all of them.) Naturescapes are calming and peaceful; cities are noisy, crowded, and full of pollution. There is a reason schizophrenia rates are markedly higher in cities than on farms. This doesn’t mean that we should just throw out cities, but it does mean we should be thoughtful about them and their effects.

People seem to do best, emotionally, when they have the support of their kin, some degree of ethnic or national pride, economic and physical security, attend religious services, and avoid crowded cities. (Here I am, an atheist, recommending church for people.) The knowledge you are at peace with your tribe and your tribe has your back seems almost entirely absent from most people’s modern lives; instead, people are increasingly pushed into environments where they have no tribe and most people they encounter in daily life have no connection to them. Indeed, tribalism and city living don’t seem to get along very well.

To return to healthy lives, we may need to re-think the details of modernity.

Politics

Philosophically and politically, I am a great believer in moderation and virtue as the ethical, conscious application of homeostatic systems to the self and to organizations that exist for the sake of humans. Please understand that this is not moderation in the conventional sense of “sometimes I like the Republicans and sometimes I like the Democrats,” but the self-moderation necessary for bodily homeostasis reflected at the social/organizational/national level.

For example, I have posted a bit on the dangers of mass immigration, but this is not a call to close the borders and allow no one in. Rather, I suspect that there is an optimal amount–and kind–of immigration that benefits a community (and this optimal quantity will depend on various features of the community itself, like size and resources.) Thus, each community should aim for its optimal level. But since virtually no one–certainly no one in a position of influence–advocates for zero immigration, I don’t devote much time to writing against it; it is only mass immigration that is getting pushed on us, and thus mass immigration that I respond to.

Similarly, there is probably an optimal level of communal genetic diversity. Too low, and inbreeding results. Too high, and fetuses miscarry due to incompatible genes. (Rh- mothers have difficulty carrying Rh+ fetuses, for example, because their immune systems–once sensitized, typically during a first Rh+ pregnancy–identify the fetus’s blood as foreign and attack it, killing the fetus.) As in agriculture, monocultures are at great risk of getting wiped out by disease; genetic heterogeneity helps ensure that some members of a population can survive a plague. Homogeneity helps people get along with their neighbors, but too much may lead to everyone thinking through problems in similar ways. New ideas and novel ways of attacking problems often come from people who are outliers in some way, including genetics.

There is a lot of talk ’round these parts that basically blames all the crimes of modern civilization on females. Obviously I have a certain bias against such arguments–I of course prefer to believe that women are superbly competent at all things, though I do not wish to stake the functioning of civilization on that assumption. If women are good at math, they will do math; if they are good at leading, they will lead. A society that tries to force women into professions they are not inclined to is out of kilter; likewise, so is a society where women are forced out of fields they are good at. Ultimately, I care about my doctor’s competence, not their gender.

In a properly balanced society, male and female personalities complement each other, contributing to the group’s long-term survival.

Women are not accidents of nature; they are as they are because their personalities succeeded where women with different personalities did not. Women have a strong urge to be compassionate and nurturing toward others, maintain social relations, and care for those in need of help. These instincts have, for thousands of years, helped keep their families alive.

When the masculine element becomes too strong, society becomes too aggressive. Crime goes up; unwinnable wars are waged; people are left to die. When the feminine element becomes too strong, society becomes too passive; invasions go unresisted; welfare spending becomes unsustainable. Society can’t solve this problem by continuing to give both sides everything they want, (this is likely to be economically disastrous,) but must actually find a way to direct them and curb their excesses.

I remember an article on the now-defunct neuropolitics (now that I think of it, the Wayback Machine probably has it somewhere,) on an experiment where groups with varying numbers of “liberals” and “conservatives” had to work together to accomplish tasks. The “conservatives” tended to solve their problems by creating hierarchies that organized their labor, with the leader/s giving everyone specific tasks. The “liberals” solved their problems by incorporating new members until they had enough people to solve specific tasks. The groups that performed best, overall, were those that had a mix of ideologies, allowing them to both make hierarchical structures to organize their labor and incorporate new members when needed. I don’t remember much else of the article, nor did I read the original study, so I don’t know what exactly the tasks were, or how reliable this study really was, but the basic idea of it is appealing: organize when necessary; form alliances when necessary. A good leader recognizes the skills of different people in their group and uses their authority to direct the best use of these skills.

Our current society greatly lacks in this kind of coherent, organizing direction. Most communities have very little in the way of leadership–moral, spiritual, philosophical, or material–and our society seems constantly intent on attacking and tearing down any kind of hierarchies, even those based on pure skill and competence. Likewise, much of what passes for “leadership” is people demanding that you do what they say, not demonstrating any kind of competence. But when we do find competent leaders, we would do well to let them lead.

Back to part one.

The Homeostasis theory of disease, personality, and life

Disease is the enemy of civilization. Wherever civilization arises, so does disease; many of our greatest triumphs have been the defeat of disease.

Homeostasis is the idea that certain systems are designed to self-correct when things go wrong–for example, when you get hot, you sweat; when you get cold, you shiver. Both actions represent your body’s natural, automatic process for keeping your body temperature within a proper range.

All living things are homeostatic systems, otherwise they could not control the effects of entropy and would fall apart. (When this happens, we call it death):

from Life is a Braid in Spacetime by Max Tegmark, Illustration by Chad Hagen

Non-living things, like robots and corporations, can also be homeostatic–corporations by hiring new employees when old ones leave, robots by correcting their balance when they start to fall.

Like organisms, organizations that are not homeostatic will tend to fall apart.

For this post, we will consider four important forms of homeostasis:

  1. Normal homeostasis: the normal feedback loops that keep the body (or organization) in its normal state under normal conditions.
  2. Defensive homeostasis: feedback loops that are activated to defend the body against severe harm, such as disease, and reassert normal homeostasis.
  3. Inadequate homeostasis: a body that cannot maintain or reassert normal homeostasis.
  4. Over-aggressive homeostasis: an excessive defensive response that harms the self.
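Since all four regimes are really one feedback loop with different parameters, a toy simulation may make the taxonomy concrete. This is purely my own sketch (the set point, gain, and disturbance numbers are invented for illustration):

```python
# Toy model of homeostasis: one feedback loop, four regimes.
# A regulated variable (think body temperature) is knocked off its set point
# by a disturbance each step; corrective feedback pushes it back.

SET_POINT = 37.0

def simulate(gain: float, disturbance: float = 1.0, steps: int = 6) -> list:
    value = SET_POINT
    history = []
    for _ in range(steps):
        value += disturbance                 # the environment pushes the system off-target
        value -= gain * (value - SET_POINT)  # feedback proportional to the error
        history.append(round(value, 2))
    return history

print("normal:         ", simulate(gain=1.0))                   # error fully corrected each step
print("defensive:      ", simulate(gain=1.0, disturbance=5.0))  # bigger assault, stronger response, same mechanism
print("inadequate:     ", simulate(gain=0.1))                   # correction too weak; the value drifts away
print("over-aggressive:", simulate(gain=2.5))                   # swings grow each step; the response itself does damage
```

With a weak gain the variable never returns to the set point; with too strong a gain the “correction” swings wider and wider, doing more damage than the original disturbance ever did.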

Normal Homeostasis

Normal homeostasis creates (and depends on) moderate, temperate behavior. Mundanely, when you have not eaten in a while, you grow hungry and so eat; when you have had enough, you feel satiated and so cease. When you have not slept in a long while, you grow tired and head to bed; when you have slept enough, you wake.

Obesity and starvation are both symptoms of normal homeostasis not operating as it should. They can be caused by environmental disorders (eg, crop failures,) internal disorders (pituitary tumors can cause weight gain,) or even just the individual’s psyche (stress renders some people unable to eat, while others cope with chocolate.)

If your body is forced out of its normal homeostatic rhythms, things begin to degenerate. After too long without sleep, (perhaps due to too many final exams, an all-night TV binge, or too many 5-hour energy drinks,) your body loses its ability to thermo-regulate; the hungry, cold, and malnourished lose their ability to fend off disease and succumb to pneumonia. Even something as obviously beneficial as hygiene can go too far–too much washing deprives the skin of its natural, protective layer of oils and beneficial microbes, leaving it open to invasion and colonization by other, less friendly microbes, like skin-eating fungi. Most of this seems obvious, but it took people a rather long time to figure out things like, “eating a 100% corn diet is bad for you.”

A body that is not in tune quickly degrades and becomes easy prey to sickness and disease; thus moderation is upheld as a great virtue and excesses as vice. A body that is properly in tune–balanced in diet, temperate in consumption, given enough exercise and rest, and nourished socially and morally–is a body that is strong, healthy, and able to deal with most of life’s vicissitudes.

(Gut bacteria are an interesting case of normal homeostasis in action. Antibiotics, while obviously beneficial in many cases, also kill much of the body’s natural gut bacteria, leading to a variety of unpleasant side effects [mostly diarrhea,] showing that too little gut bacteria is problematic. But the idea that our gut bacteria are entirely harmless is probably an over-simplification; while being effectively “along for the ride” means that their interests align roughly with ours, that is no guarantee that they will always be well-behaved. Too much gut bacteria may also be a problem. One theory I have read on why people need to sleep–and why we feel cruddy when we haven’t slept–is that our gut bacteria tend to be active during the day, which produces waste, and the buildup of bacterial waste in your bloodstream makes you feel bad. While you sleep, your body temperature drops, slowing down the bacteria and giving you a chance to clean out your systems.)

The homeostasis theory of disease–the idea that an unbalanced body loses its ability to fend off diseases and so becomes ill–should not be seen as competing with the Germ Theory of Disease, but complementing it. Intellectually, HTD has been around for a long time, informing the Greek medical treatises on the “four humours,” traditional Chinese medical ideas of the effects of “hot” and “cold” food, the general principle of Yin and Yang, many primitive notions of magic, and modern notions about probiotics. HTD has led to some obviously (in retrospect) bad ideas, like bleeding patients or eating things that are actually somewhat toxic. But it has also led to plenty of decent ideas, like that you should eat a “balanced” diet, enjoy life’s pleasures in moderation, or that cholera sufferers should be given lots of water.

Defensive Homeostasis

Defensive homeostasis is an extreme version of normal homeostasis. Your body is always defending itself against pathogens and injuries, but some assaults are more noticeable than others.

One of the most miserable sicknesses I have endured happened after eating raw vegetables while on vacation; I had washed them, but obviously not enough. Not only did my stomach hurt, but every part of me; even my skin hurt. My body, reasoning that something was deeply wrong, did its very mighty best to eliminate any ingested toxins by every route available, profuse sweat and tears included.

Luckily, it was all over by morning, and I was left with a deep gratitude toward my body for the steps it had taken–however extreme–to make me well again.

It is important to distinguish between the effects of sickness and the effects of the homeostatic system attempting to cure itself; conflating the two is a crucial mistake people make all the time. In my case, the sickness made me feel ill by flooding my body with pathogens and their resultant toxins. The vomiting felt awful, but the vomiting was not the sickness; vomiting was my body’s attempt to rid itself of the pathogens. Taking steps to prevent the vomiting, say, by taking an anti-nausea medication, would have let the pathogens remain inside of me, doing more harm.

(Of course, it is crucial to make sure that a vomiting person does not become dehydrated.)

To use a more general example, fevers are your body’s way of killing viruses and slowing their reproduction–just as we kill microbes by cooking our food. Fevers feel unpleasant, but they are not diseases. Using medication to lower mild fevers may actually increase mortality [PDF] by interfering with the body’s ability to kill the disease. Quoting from the PDF:

“…children with chickenpox who are treated with acetaminophen have been shown to have a longer time to total crusting of lesions than do placebo-treated control subjects [15]. In addition, adults with rhinovirus infections exhibit a longer duration of viral shedding and increased nasal signs and symptoms when treated with antipyretic medications [16].”

Additionally, artificially depressing how sick you feel increases the likelihood of getting out of bed and moving around, which in turn increases the likelihood of spreading your sickness to other people.

Fevers of 105 degrees F or above are excessive, do have the potential to harm you, and should be treated. But a fever of 102 should be allowed to do its work.

Likewise, in the case of cholera, the most effective treatment is to keep the sufferer hydrated (or re-hydrate them) until their body can wipe out the disease. (Cholera basically makes you lose all of your bodily fluids and die of dehydration.) It is easy to underestimate just how much water the sufferer has lost; according to Wikipedia, “Ten percent of a person’s body weight in fluid may need to be given in the first two to four hours.[12]” Keep in mind the need to replenish potassium levels while you re-hydrate; if you don’t have any special re-hydration drinks, you can boil 1 liter of water and add 1/2 teaspoon of salt, 6 teaspoons of sugar, and 1 mashed banana; in a pinch, probably any clean beverage is better than nothing. Untreated, 50-90% of cholera victims die; with rehydration, the death rate amazingly drops below 1%:

“In untreated cases the death rate is high, averaging 50%, and as high as 90% in epidemics, but with effective treatment the death rate is less than 1%. The intravenous and oral replacement of body fluids and essential electrolytes and the restoration of kidney function are more important in therapy than the administration of antibacterial drugs.”
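To make the arithmetic above concrete, here is a quick sketch; the 10%-of-body-weight figure and the improvised recipe are from the sources just quoted, the function names are my own, and none of this is medical guidance:

```python
# Rough rehydration arithmetic, using the figures quoted above.
# Illustration only; not medical guidance.

def initial_fluid_liters(body_weight_kg: float) -> float:
    """~10% of body weight in fluid over the first two to four hours
    (1 kg of water is 1 liter)."""
    return body_weight_kg * 0.10

def improvised_ors(liters: float) -> dict:
    """Scale the improvised recipe: per liter of boiled water,
    1/2 teaspoon of salt, 6 teaspoons of sugar, and 1 mashed banana."""
    return {
        "water_liters": liters,
        "salt_tsp": 0.5 * liters,
        "sugar_tsp": 6 * liters,
        "bananas": round(liters),
    }

print(initial_fluid_liters(60))  # a 60 kg sufferer may need ~6 liters in the first hours
print(improvised_ors(6))
```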

This is super important, so I’m going to repeat it: Don’t confuse the effects of sickness and the effects of the homeostatic system attempting to cure itself. This goes for organizations and societies, too.

Unfortunately, much of our economic theory is not based on the idea that societies–or the Earth–trend toward homeostasis, but on the assumption of infinite growth. The economic proponents of open borders, for example, basically seem to think that there are no theoretical limits to the number of people who can move to Europe and the US and take up a Western lifestyle.

Pension plans (and Social Security) were also designed with infinite growth in mind. Now that total fertility rates (TFRs) have dropped below replacement across the developed world, many countries are faced with the horrifying prospect that old people may not be able to depend on the incomes of children they didn’t create for their retirement. I suppose the solution to such a problem is that you only let people with 3+ children have pensions, or design a pension system that doesn’t require a never-ending process of population expansion, because the planet cannot hold infinite numbers of people.
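The arithmetic is easy to sketch: in a toy pay-as-you-go model (numbers invented for illustration), each generation’s workers fund the previous generation’s retirees, and each generation is TFR/2 times the size of the one before it:

```python
# Toy pay-as-you-go pension arithmetic (invented numbers, not a forecast).
# Each generation of workers funds the previous generation's retirement;
# generation size scales by TFR/2 (two parents per child; mortality and
# migration are ignored).

def generation_sizes(initial: float, tfr: float, generations: int) -> list:
    sizes = [initial]
    for _ in range(generations - 1):
        sizes.append(sizes[-1] * tfr / 2)
    return sizes

for tfr in (2.1, 1.5):
    gens = generation_sizes(1_000_000, tfr, generations=4)
    workers_per_retiree = gens[1] / gens[0]  # constant ratio: tfr / 2
    print(f"TFR {tfr}: {workers_per_retiree:.2f} workers per retiree")
# TFR 2.1 -> 1.05 workers per retiree; TFR 1.5 -> 0.75, generation after generation
```

At a TFR of 1.5, each generation of contributors is only three-quarters the size of the generation it must support, and a one-time infusion of people does not change that ratio; it only defers the shortfall by a generation.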

Declining TFR is not a disease; it is a symptom, most likely of countries where ordinary people struggle to afford children. The fertility rate will pick back up once the population has shrunk enough that there are enough resources per person–including space–to make having children an attractive option.

But to those obsessively focused on their unsustainable pensions, low TFR is a disease, and it has to be fixed by bringing in more people, preferably people who will have lots of children.

“Japan must import more people!” the NY Times constantly screams. “They don’t have enough to fill the pensions!” (Japan’s trains, meanwhile, are so crowded that they actually hire people to shove passengers into the cars to make them fit.)

Just as treating a fever inhibits your body’s ability to fight the real disease, so importing people to combat a low TFR inhibits your country’s ability to return to a proper ratio of resource to people, making the problem much, much worse.

Remember these graphs?

[Graphs: homicide rates, 1900-2001, among others]

Mass immigration => bigger labor market => lower wages => lower TFR => underfunded pensions => demands for more immigrants.

Inadequate Homeostasis

Inadequate and over-aggressive homeostatic systems are pathological conditions that render the self unable to respond appropriately to changing conditions and reassert normal homeostasis. For example, people with a certain mutation in the ITPR2 gene cannot sweat, increasing their chances of dangerously overheating. People with AIDS, of course, have deficient immune systems, because the virus specifically attacks immune cells.

Inability to maintain or reassert homeostasis in biological systems is most likely a result of damage due to mutation or infection. In a non-organism, it is more likely a result of the organization or entity just having been created with inadequate homeostatic systems.

A mundane example is a city that has expanded and so can no longer handle the amount of traffic, trash, and rainwater run-off it produces. The original systems, such as sewers, roads, and trash collection, could handle the city’s normal variations back when they were designed, but no longer. Traffic jams, flooding, and giant piles of trash ensue.

At this point, a city has two choices: increase systemic complexity (ie, upgrade the infrastructure,) or decrease the amount of waste it produces by people dying/moving away.

Here’s a graph of the historical population of Rome:

[Graph: historical population of Rome]

Rome had obviously been in decline since around 100 AD, probably due to the Antonine Plague–most plagues are, of course, homeostasis violently reasserting itself when human societies become too big for their hygiene systems. In the 400s, the Roman empire collapsed, bringing sieges, famines, and violent barbarian invasions, and ending the tax revenues and supply networks that had formerly supported the city.

By 752, Rome had dropped from 1.65 million people to 40,000 people, but the city reached its true nadir in 1347, when plague reduced the population to 17,000, which is even lower than the estimates for 800 BC. Rome would not return to its previous high until 1850, though if I know anything about near-vertical lines on graphs, it’s that they don’t go up forever. When the collapse begins again, I wonder if the city will return to its 1000s population, or stabilize at some new level.

I’ve spoken before of La Griffe du Lion’s Smart Fraction Theory, which posits that a country’s GDP correlates with the percent of its population with (verbal) IQs over 120. These are the people who can plan and maintain complex systems. This suggests that, unless IQs increase over time, countries may have a natural complexity limit they can’t pass (though many countries may not be operating at their complexity limits.)
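For a sense of the numbers involved, the “smart fraction” above the 120 cutoff can be computed directly under the standard assumption that IQ is normally distributed with a standard deviation of 15 (the means below are illustrative, not data for any country):

```python
# Fraction of a population above IQ 120, assuming IQ ~ Normal(mean, 15).
# The 120 cutoff is La Griffe du Lion's; the means below are illustrative.
from statistics import NormalDist

def smart_fraction(mean_iq: float, cutoff: float = 120.0, sd: float = 15.0) -> float:
    return 1 - NormalDist(mean_iq, sd).cdf(cutoff)

for mean in (90, 100, 110):
    print(f"mean IQ {mean}: {smart_fraction(mean):.1%} above the cutoff")
# mean IQ 90: ~2.3%; mean IQ 100: ~9.1%; mean IQ 110: ~25.2%
```

Note how a 10-point shift in the mean multiplies the smart fraction several-fold, which is why the theory treats small differences in averages as large differences in complexity-handling capacity.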

A different kind of inadequate homeostasis is mission creep, when organizations start seeing it as their job to do more and more things outside their original mandate, as when the Sierra Club starts championing SJW causes; in these cases, the organization lacks proper feedback mechanisms to keep itself on-task. Eventually, like MTV, the organization loses sight entirely of its original purpose (though to be fair, MTV still exists, so its strategy hasn’t been unsuccessful.)

Over-Aggressive Homeostasis

Allergies and auto-immune disorders are classic examples of over-aggressive homeostatic systems. Allergies happen when the body responds to normal stimuli like pollen or food as though they were pathogens; auto-immune disorders involve the immune system accidentally attacking the body’s own cells instead of pathogens.

At a higher level, some people respond with violent aggression to minor annoyances; some countries start disastrous wars against countries they can’t conquer, others attack their own citizens and destroy their own homeostatic systems.

Millions of years of evolution have equipped our bodies with self-correcting systems to keep us functioning, so that human pathologies are relatively easy to identify. Organizations, however, have endured far fewer years of evolutionary pressure, so their homeostatic systems are much cruder and more likely to fail. We can understand biological pathologies fairly well, but often fail to identify organizational pathologies entirely; even when we do have some sense* that things are definitely wrong, it’s hard to say exactly what, much less identify a coherent plan to fix it and then convince other people to actually do it.

*or perhaps in your case, dear reader, a definite sense

For organizations to continue working, they need adequate homeostatic systems to keep them on track and prevent both under- and over-reactions. The US Constitution, for example, establishes a system of “checks and balances” and “separation of powers” among the executive, legislative, and judicial branches, not to mention the federal, state, and individual levels (via voting and citizen juries.) For all its flaws, this system has managed to basically keep going for over 200 years, making it one of the oldest systems of continuous governance in the world (most of the world’s governments were established following the breakup of colonial empires and the Soviet Union), but the system probably needs revision over time to keep it functioning. (We can further discuss a variety of ways to keep systems functional elsewhere, but Slate Star Codex’s post on Why don’t Whales get Cancer? [basically, the theory is that whales are so big that their cancers get cancer and kill themselves before they kill the whale] seems relevant.)

All human civilization depends on homeostatic systems to keep everyone in them alive. We may think of civilization as order, but it is not perfect order. Perfect order is a crystal; perfect order is absolute zero. It is not alive; it does not change, move, or adapt. Life is a braid in spacetime; civilization is homeostatic.

 

Part two: homeostasis and personality.

Prohibition part 2: Beer, Cholera, and Public Health

Part 1: Did European Filthiness lead to Prohibition?

So why were the immigrants drinking so much?

Simply put, European cities prior to the installation of underground sewers and water purification plants were disgusting, filth-ridden cesspools where the average citizen stood an astronomical chance of being felled by fecal-borne diseases. How the cities got to be so revolting is beyond me–it may just be a side effect of living in any kind of city before the invention of effective sewers. Nevertheless, European city dwellers drank their own feces until everyone started catching cholera. (Not to mention E. coli, smallpox, syphilis, typhus, tuberculosis, measles, dysentery, Bubonic Plague, gonorrhea, leprosy, malaria, etc.)

The average superstitious “primitive” knows that dead bodies contain mystical evil contamination properties, and that touching rotting carcasses can infect you with magical death particles that will then kill you (or if you are a witch, your intended victims,) but Europeans were too smart for such nonsense; Ignaz Semmelweis, the guy who insisted that doctors were killing mothers by carrying corpse particles on their unwashed hands from the autopsy table to the delivery room, was hauled off to an insane asylum, where he was beaten by the guards and died of his injuries.

The women, of course, had figured out that some hospitals murdered their patients and some hospitals did not; the women begged not to be sent to the patient-murdering hospitals, but such opinions were, again, mere superstitions that the educated classes knew to ignore.

It is amazing what man finds himself suddenly unable to comprehend so long as his incomprehension is necessary for making money, whether it be the amount of food necessary to prevent a child from starving or that you should not wallow in feces.

Forgive me my vitriol, but there are few things I hate worse than disease, and those who willfully spread death and suffering should be dragged into the desert and shot.

Cleanliness is next to Godliness.

Anyway, back to our story. The much-beleaguered “Dark Ages” of Medieval Europe were actually a time of relatively few epidemics, simply because the population was too low for much major disease transmission, but as the trade routes expanded and cities grew, epidemic after epidemic swept the continent. The Black Death came in 1346, carrying off 75 to 200 million people, or 30-60% of Europe’s population. According to the Wikipedia, “Before 1350, there were about 170,000 settlements in Germany, and this was reduced by nearly 40,000 by 1450.” The Black Plague would not disappear from Europe until the 1700s, though it returned again around 1900–infecting San Francisco at the same time–in the little-known “Third Plague” outbreak that killed approximately 15 million people, (most of them in India and China,) and officially ended in 1959.

(BTW, rodents throughout much of the world, including America, still harbor plague-bearing fleas which do actually still give people the plague, so be cautious about contact with wild rodents or their carcasses, and if you think you have been infected, get to a hospital immediately, because modern medicine can generally cure it.)

Toward the end of the 1700s, smallpox killed about 400,000 Europeans per year, wiping out 20-60% of those it infected.

Cholera spreads via the contamination of drinking water with cholera-laden diarrhea. Prevention is simple: don’t shit in the drinking water. If you can’t convince people not to shit in the water supply, then boil, chlorinate, sterilize, filter, or do whatever it takes to get your water clean.

In 1832, Cholera struck the UK, killing 53,000 people; France lost 100,000. In 1854, epidemiologist John Snow risked his life to track the cholera outbreak in Soho, London. His work resulted in one of history’s most important maps:

[John Snow’s cholera map of Soho, 1854]

Each black line represents a death from cholera.

The medical profession of Snow’s day believed that cholera was spread through bad air–miasmas–and that Snow was a madman for being anywhere near air breathed out by cholera sufferers. Snow’s map not only showed that the outbreak was concentrated around one water source, (the PUMP in the center of the map,) but also showed one building on Broad street that had been mysteriously spared the contagion, suffering zero deaths: the brewery.

The brewery’s workers did not drink unadulterated water from the pump; they drank beer at breakfast, lunch, and dinner. Drinking nothing but beer might sound like a bad strategy, especially if you need to drive anywhere, but beer has a definite advantage over water: brewing requires boiling the wort, and fermentation produces alcohol that keeps pathogens from growing in the finished drink.

It wasn’t until 1866 that the establishment finally started admitting the unpleasant truth that people were catching cholera because they were drinking poop water, but since then, John Snow’s work has saved the lives of millions of people.

Good luck finding anyone who remembers Snow’s name today–much less Semmelweis’s–but virtually every school child in America knows about Amelia Earhart, a woman whose claim to fame is that she failed to cross the Pacific Ocean in a plane. (Sorry, I was looking at children’s biographies today, and Amelia Earhart remains one of my pet peeves in the category of “Why would I try to inspire girls via failure?”)

But that is all beside the point, which is simply that Europeans who drank lots of beer lived, while Europeans who drank water died. This is the sort of thing that can exert a pretty strong selective pressure on people to drink lots of beer.

Meanwhile, Back in America…

While Americans were not immune to European diseases, lower population density made it harder for epidemics to spread. The same plague that killed 13 million people in China and India killed a mere couple hundred in San Francisco, and appears to have never killed significant numbers in other states.

Low population density meant, among other things, far less excrement in the water. American water was probably far less contaminated than European water, and so Americans had undergone much less selective pressure to drink nothing but beer.

Many American religious groups took a dim view of alcohol. The Puritans did not ban alcohol, but believed it should be drunk in moderation and looked down on drunkenness. The Methodists, another Protestant group that broke away from the Anglican Church in the late 1700s and spread swiftly in America, were against alcohol from their start. Methodist ministers were to drink chiefly water, and by the mid-1880s, they were using “unfermented wine” for their sacraments. The Presbyterians began spreading the anti-alcohol message during the Second Great Awakening, and by 1879, Catherine Booth, co-founder of the Salvation Army, claimed that in America, “almost every [Protestant] Christian minister has become an abstainer.” (source) Even today, many Southern Baptists, Mormons, and Seventh Day Adventists abstain entirely from alcohol, the Mormons apparently going so far as to use water instead of wine in their sacraments.

Temperance movements also existed in Europe and in the European colonies, but never reached the same heights as they did in the US. Simply put, where the water was bad, poor people could not afford to drink non-fermented beverages. Where the water was pure, people could claim that drinking it was a necessary piece of salvation.

As American cities filled with poor, desperate foreigners fleeing the famine and filth of Europe, their penchant for violent outbursts following over-indulgence in alcohol was not lost on their new neighbors, and so Prohibition’s coalition began to form: women, who were most often on the receiving end of drunken violence; the Ku Klux Klan, which had it out for foreigners generally and Papists especially; and the Protestant ministers, who were opposed to both alcoholism and Papism.

The Germans were never considered as problematic as the Irish, being more likely to be employed and less likely to be engaged in drunken crime, but they held themselves apart from the rest of society, living in their own communities, joining German-specific social clubs, and still speaking German instead of English, which did not necessarily endear them to their neighbors.

Prohibition was opposed primarily by wealthy Germans, (especially the brewers among them;) Episcopalians, (who were afraid their sacramental wine would be banned;) and Catholics. The breweries also campaigned against Women’s Suffrage, on the grounds that pretty much all of the suffragettes were calling for Prohibition.

WWI broke the German community by making it suddenly a very bad idea to be publicly German, and people decided that using American grain to brew German beer instead of sending that grain to feed the fighting men on the front lines was very unpatriotic indeed. President Wilson championed the income tax, which allowed the Federal Government to run off something other than alcohol taxes, women received the right to vote, and Prohibition became the law of the land–at least until 1933, when everyone decided it just wasn’t working out so well.

But by that time, the drinking water problem had been mostly worked out, so people at least had a choice of beverages they could safely and legally imbibe.

Part 1 is here.