Does the DSM need to be rewritten?

I recently came across an interesting paper that looked at the likelihood that a person, once diagnosed with one mental disorder, would be diagnosed with another. (Exploring Comorbidity Within Mental Disorders Among a Danish National Population, by Oleguer Plana-Ripoll et al.)

This was a remarkable study in two ways. First, it had a sample size of 5,940,778, followed up for 83.9 million person-years–basically, the entire population of Denmark over 15 years. (Big Data indeed.)

Second, it found that for virtually every disorder, one diagnosis increased your chances of being diagnosed with a second disorder. (“Comorbid” is a fancy word for “two diseases or conditions occurring together,” not “dying at the same time.”) Some pairs of disorders were especially likely to co-occur: people diagnosed with “mood disorders” had a 30% chance of also being diagnosed with “neurotic disorders” during the 15 years covered by the study.

Mood disorders include bipolar disorder, depression, and seasonal affective disorder (SAD);

Neurotic disorders include anxieties, phobias, and OCD.

Those chances were considerably higher for people diagnosed at younger ages and decreased significantly for the elderly–those diagnosed with mood disorders before the age of 20 had more than a 40% chance of also being diagnosed with a neurotic disorder, while those diagnosed after 80 had only a 5% chance.

I don’t find this terribly surprising, since I know someone with at least five different psychological diagnoses (nor is it surprising that many people with “intellectual disabilities” also have “developmental disorders”), but it’s interesting just how pervasive comorbidity is across conditions that are ostensibly separate diseases.

This suggests to me that either many people are being mis-diagnosed (perhaps diagnosis itself is very difficult), or what look like separate disorders are often actually a single disorder. While it is certainly possible for someone to have both a phobia of snakes and seasonal affective disorder, the person I know with five diagnoses most likely has only one “true” disorder that has simply been diagnosed and treated differently by different clinicians. It seems likely that some people’s depression also manifests as deep-rooted anxiety or phobias, for example.

While this is a bit of a blow for many psychiatric diagnoses (and I am quite certain that many diagnostic categories will need a fair amount of revision before all is said and done), autism recently got a validity boost–How brain scans can diagnose Autism with 97% accuracy.

The title is overselling it, but it’s interesting anyway:

Lead study author Marcel Just, PhD, professor of psychology and director of the Center for Cognitive Brain Imaging at Carnegie Mellon University, and his team performed fMRI scans on 17 young adults with high-functioning autism and 17 people without autism while they thought about a range of different social interactions, like “hug,” “humiliate,” “kick” and “adore.” The researchers used machine-learning techniques to measure the activation in 135 tiny pieces of the brain, each the size of a peppercorn, and analyzed how the activation levels formed a pattern. …

So great was the difference between the two groups that the researchers could identify whether a brain was autistic or neurotypical in 33 out of 34 of the participants—that’s 97% accuracy—just by looking at a certain fMRI activation pattern. “There was an area associated with the representation of self that did not activate in people with autism,” Just says. “When they thought about hugging or adoring or persuading or hating, they thought about it like somebody watching a play or reading a dictionary definition. They didn’t think of it as it applied to them.” This suggests that in autism, the representation of the self is altered, which researchers have known for many years, Just says.

N=34 is not quite as impressive as N=Denmark, but it’s a good start.
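
For intuition about where the “33 out of 34” figure comes from: with only 34 participants, classifiers like this are typically scored by leave-one-out cross-validation–train on 33 brains, predict the held-out one, repeat 34 times. Here is a minimal sketch of that procedure; the data, labels, and classifier below are hypothetical stand-ins, not the study’s actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(34, 135))   # hypothetical: 34 participants x 135 brain regions
y = np.repeat([0, 1], 17)        # 17 autistic, 17 neurotypical (labels are stand-ins)
X[y == 1, :10] += 1.0            # inject a fake group difference so the demo works

# Leave-one-out: train on 33 participants, classify the held-out one, repeat 34x.
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=LeaveOneOut()).mean()
print(f"Leave-one-out accuracy: {acc:.0%}")   # 33/34 correct would print 97%
```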


Book Club: The 10,000 Year Explosion pt. 6: Expansion


Welcome back to the Book Club. Today we’re discussing chapter 6 of Cochran and Harpending’s The 10,000 Year Explosion: “Expansions.”

The general assumption is that the winning advantage is cultural–that is to say, learned. Weapons, tactics, political organization, methods of agriculture: all is learned. The expansion of modern humans is the exception to the rule–most observers suspect that biological differences were the root cause of their advantage. … 

The assumption that more recent expansions are all driven by cultural factors is based on the notion that modern humans everywhere have essentially the same abilities. That’s a logical consequence of human evolutionary stasis: If humans have not undergone a significant amount of biological change since the expansion out of Africa, then people everywhere would have essentially the same potentials, and no group would have a biological advantage over its neighbors. But as we never tire of pointing out, there has been significant biological change during that period.

I remember a paper I wrote years ago (long before this blog) on South Korea’s meteoric economic rise. In those days you had to actually go to the library to do research, not just futz around on Wikipedia. My memory says the stacks were dimly lit, though that is probably just some romanticizing. 

I pored over volumes on five-year economic plans, trying to figure out why South Korea’s were more successful than other nations’. Nothing stood out to me. Why this plan and not that one? Did 5 or 10 years matter? 

I don’t remember what I eventually concluded, but it was probably something along the lines of “South Korea made good plans that worked.” 

People around these parts often criticize Jared Diamond for invoking environmental explanations while ignoring or directly counter-signaling their evolutionary implications, but Diamond was basically the first author I read who said anything that even remotely began to explain why some countries succeeded and others failed. 

Environment matters. Resources matter. Some peoples have long histories of civilization, others don’t. Korea has a decently long history. 

Diamond was one of many authors who broke me out of the habit of looking only at the explicit actions of officially recognized governments, and turned my attention toward wider patterns of culture, history, and environment. It was while reading Peter Frost’s blog that I first encountered the phrase “gene-culture co-evolution,” which supplies the missing link. 

[Map: IQ by country–estimates by Lynn and Vanhanen, 2006]

South Korea does well because 1. It’s not communist and 2. South Koreans are some of the smartest people in the world. 

I knew #1, but I could have saved myself a lot of time in the stacks if someone had just told me #2 instead of acting like SK’s economic success was a big mystery. 

The fact that every country was relatively poor before industrialization, and South Korea was particularly poor after a couple decades of warfare back and forth across the peninsula, obscures the nation’s historically high development. 

For example, the Korean examination system, the Gwageo, was instituted in 788 (though it apparently didn’t become important until 958). Korea has had agriculture and literacy for a long time, with accompanying political and social organization. This probably has more to do with South Korea having a relatively easy time adopting the modern industrial economy than anything in particular in the government’s plans. 

Cochran has an interesting post on his blog about Jared Diamond and domestication: 

In fact, in my mind the real question is not why various peoples didn’t domesticate animals that we know were domesticable, but rather how anyone ever managed to domesticate the aurochs. At least twice. Imagine a longhorn on roids: they were big and aggressive, favorites in the Roman arena. … 

The idea is that at least some individual aurochs were not as hostile and fearful of humans as they ought to have been, because they were being manipulated by some parasite. … This would have made domestication a hell of a lot easier. …

The beef tapeworm may not have made it through Beringia. More generally, there were probably no parasites in the Americas that had some large mammal as intermediate host and Amerindians as the traditional definite host. 

They never mentioned parasites in gov class. 

Back to the book–I thought this was pretty interesting:

One sign of this reduced disease pressure is the unusual distribution of HLA alleles among Amerindians. The HLA system … is a group of genes that encode proteins expressed on the outer surfaces of cells. The immune system uses them to distinguish the self from non-self… their most important role is in infectious disease. … 

HLA genes are among the most variable of all genes. … Because these genes are so variable, any two humans (other than identical twins) are almost certain to have a different set of them. … Natural selection therefore favors diversification of the HLA genes, and some alleles, though rare, have been preserved for a long time. In fact, some are 30 million years old, considerably older than Homo sapiens. …

But Amerindians didn’t have that diversity. Many tribes have a single HLA allele with a frequency of over 50 percent. … A careful analysis of global HLA diversity confirms continuing diversifying selection on HLA in most human populations but finds no evidence of any selection at all favoring diversity in HLA among Amerindians.

The results, of course, went very badly for the Indians–and allowed minuscule groups of Spaniards to conquer entire empires. 

The threat of European (and Asian and African) diseases wiping out native peoples continues, especially for “uncontacted” tribes. As the authors note, the Surui of Brazil numbered 800 when contacted in 1980, but only 200 in 1986, after tuberculosis had killed most of them. 

…in 1827, smallpox spared only 125 out of 1,600 Mandan Indians in what later became North Dakota.

The past is horrific. 

I find the history of ancient exploration rather fascinating. Here is the frieze in Persepolis with the okapi and three Pygmies, from about 500 BC.

The authors quote Joao de Barros, a 16th century Portuguese historian: 

But it seems that for our sins, or for some inscrutable judgment of God, in all the entrances of this great Ethiopia we navigate along… He has placed a striking angel with a flaming sword of deadly fevers, who prevents us from penetrating into the interior to the springs of this garden, whence proceed these rivers of gold that flow to the sea in so many parts of our conquest.

Barros had a way with words. 

It wasn’t until quinine became widely available that Europeans had any meaningful success at conquering Africa–and even then, despite massive technological advantages, Europeans haven’t held the continent, nor have they made any significant, long-term demographic impact. 

[Map: lactose intolerance around the world. Source: National Geographic]

The book then segues into a discussion of the Indo-European expansion, which the authors suggest might have been due to the evolution of a lactase persistence gene. 

(Even though we usually refer to people as “lactose intolerant” and don’t regularly refer to people as “lactose tolerant,” it’s really tolerance that’s the oddity–most of the world’s population can’t digest lactose after childhood. Lactase is the enzyme that breaks down lactose.)

Since the book was published, the Indo-European expansion has been traced genetically to the Yamnaya (not to be confused with the Yanomamo) people, located originally in the steppes north of the Caucasus mountains. (The Yamnaya and Kurgan cultures were, I believe, the same.) 

An interesting linguistic note: 

Uralic languages (the language family containing Finnish and Hungarian) appear to have had extensive contact with early Indo-European, and they may share a common ancestry. 

I hope these linguistic mysteries continue to be decoded. 

The authors claim that the Indo-Europeans didn’t make a huge genetic impact on Europe, practicing primarily elite dominance–but on the other hand, A Handful of Bronze-Age Men Could Have Fathered 2/3s of Europeans:

In a new study, we have added a piece to the puzzle: the Y chromosomes of the majority of European men can be traced back to just three individuals living between 3,500 and 7,300 years ago. How their lineages came to dominate Europe makes for interesting speculation. One possibility could be that their DNA rode across Europe on a wave of new culture brought by nomadic people from the Steppe known as the Yamnaya.

That’s all for now; see you next week.

Sugar

I have some hopefully good, deep stuff I am working on, but in the meanwhile, here is a quick, VERY SPECULATIVE thread on my theory for why refined sugars are probably bad for you:

First, refined sugars are evolutionarily novel. Unless you’re a Hadza, your ancient ancestors never had this much sugar.

Pick up a piece of raw sugar cane and gnaw on it. Raw sugar cane has such a high fiber-to-sugar ratio that you can use it as a toothbrush after chewing it for a bit.

According to the internet, a stick of raw sugar cane has 10 grams of sugar in it. A can of Coke has 39. Even milk (whole, skim, or fat-free) contains 12 grams of natural milk sugar (lactose) per glass. Your body has no problem handling the normal amounts of unrefined sugars in regular foods, but to get the amount of sugar found in a single soda, you’d have to eat almost four whole stalks of sugarcane–which you certainly aren’t going to do in a few minutes.
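
The back-of-the-envelope arithmetic behind that “almost four stalks” claim, using the figures cited above:

```python
# Figures as cited above: grams of sugar per serving.
cane_stick = 10   # one stick of raw sugar cane
coke_can = 39     # one 12 oz can of Coke

print(coke_can / cane_stick)   # ~3.9 stalks of cane per can of Coke
```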

It’s when we extract all of the sugar and throw away the rest of the fiber, fat, and protein in the food that we run into trouble.

(The same is probably also true of fat, though I am rather fond of butter.)

In my opinion, all forms of heavily refined sugar are suspect, including fruit juice, which is essentially refined fructose. People think that fruit juice is “healthy” because it comes from fruit, which is a plant and therefore “natural” and “good for you,” unlike, say, sugar, which comes from sugar cane, which is… also a plant. Or HFCS, which is totally unnatural because it comes from… corn. Which is a plant.

“They actually did studies on the sugar plantations back in the early 1900s. All of the workers were healthy and lived longer than the sugar executives who got the refined, processed product.”

I don’t know if I agree with everything he has to say, but refined fructose is no more natural than any other refined sugar. Again, the amount of sugar you get from eating an apple is very different from the amount you get from a cup of apple juice.

Now people are talking about reducing childhood obesity by eliminating the scourge of 100% fruit juice:

Excessive fruit juice consumption is associated with increased risk for obesity… sucrose consumption without the corresponding fiber, as is commonly present in fruit juice, is associated with the metabolic syndrome, liver injury, and obesity.

Whole fruit is probably good for you. Refined juice is not.

Here’s another study on the problems with fructose:

If calcium levels in the blood are low, our bodies produce more parathyroid hormone, stimulating the absorption of calcium by the kidneys, as well as the production of vitamin D (calcitriol), also in the kidneys. Calcitriol stimulates the absorption of calcium in the intestine, decreases the production of PTH and stimulates the release of calcium from the bone. …

… Ferraris fed rats diets with high levels of glucose, fructose or starch. He and his team studied three groups of lactating rats and three groups of non-pregnant rats (the control group).

“Since the amounts of calcium channels and of binding proteins depend on the levels of the hormone calcitriol, we confirmed that calcitriol levels were much greater in lactating rats,” said Ferraris.  … “However, when the rat mothers were consuming fructose, there were no increases in calcitriol levels,” Ferraris added. “The levels remained the same as those in non-pregnant rats, and as a consequence, there were no increases in intestinal and renal calcium transport.”

You then have two options: food cravings until you eat enough to balance the nutrients, or stripping your bones of calcium. This, I suspect, is what triggers tooth decay.

Sugar not only feeds the bacteria on your teeth (I think), it also weakens your teeth to pay the piper for sugar digestion. (Also, there may be something about sugar-fed bacteria lowering the pH in your mouth.)

The second thing that happens is that your taste buds acclimate to excessive sugar. Soon “sweet” tastes “normal.”

Now when you try to stop eating sugar, normal food tastes “boring,” “sour,” “bitter,” etc.
This is where you just have to bite the bullet and cut sugar anyway. If you keep eating normal food, eventually it will start tasting good again.

It just takes time for your brain to change its assumptions about what food tastes like.
But if you keep sweetening your food with “artificial” sweeteners, then you never give yourself a chance to recalibrate what food should taste like. You will keep craving sugar.
And it is really hard to stop eating sugar and let your body return to normal when you crave sugar.

If artificial sweeteners help you reduce sugar consumption and eventually stop using it altogether, then they’re probably a good idea, but don’t fall into the trap of thinking you’re going to get just as much cake and ice cream as always, only now without any consequences. No. Nature doesn’t work like that. Nature has consequences.

So I feel like I’ve been picking on fructose a lot in this post. I didn’t mean to. I am suspicious of all refined sugars; these are just the sources I happened across while researching today.

I am not sure about honey. I don’t eat a lot of honey, but maybe it’s okay. The Hadza of Tanzania eat a great deal of honey and they seem fine, but maybe they’re adapted to their diet in ways that we aren’t.

So what happens when you eat too much sugar? Aside from, obviously, food cravings, weight gain, mineral depletion, and tooth decay…

So here’s a theory:

Our bodies naturally cycle between winter and summer states. At least they do if you hail from a place that historically had winter; I can’t speak for people in radically different climates.

In the summer, plant matter (carbohydrates, fiber) is widely available, and any animal that can do so takes as much advantage of it as possible. As omnivores, we gorge on berries, leaves, fruits, tubers–really, whatever we can. When we are satiated–when we have enough fat stores to last the winter–our bodies start shutting down insulin production. That’s enough. We don’t need it anymore.

In the winter, there’s very little plant food naturally available, unless you’re a farmer (and farming is relatively recent in areas with long winters).

In the winter, you hunt animals for meat and fat. This is what the Inuit and Eskimo did almost all year round.

The digestion of meat and fat does not require insulin; it works on the ketogenic pathways, which, long story short, also turn food into energy and keep people alive.

The real beauty of ketosis is that, apparently, it ramps up your energy production–that is, you feel physically warmer when running entirely off of meat and fat than when running off carbs. Given that ketosis is the winter digestive cycle, this is amazingly appropriate.

By spring, chances are you’ve lost a lot of the weight from last summer. Winters are harsh. With the fat gone, the body starts producing insulin again.

At this point, you go from hyperglycemia (too much sugar in your bloodstream if you eat anything sweet, due to no insulin) to hypoglycemia–your body produces a lot of insulin to transform any plants you eat into energy FAST. (Remember the discussion above about how your body transforms fructose into fat? Back in our ancestral environment, that was a feature, not a bug!)

This lets you put on pounds quickly in the spring and summer, using now-available plants as your primary food source.

The difficulty with our society is we’ve figured out how to take the energy part out of the plants, refine it, and store up huge quantities of it so we can eat it any time we want, which is all the time.

Evolution makes us want to eat, obviously. Ancestors who didn’t have a good “eat now” drive didn’t eat whatever good food was available and didn’t become ancestors.

But now we’ve hacked that, and as a result we never go into the sugar-free periods we were built to occasionally endure.

I don’t think you need to go full keto or anti-bread or something to make up for this. Just cutting down on refined sugars (and most refined oils, btw) is probably enough for most people.

Note: Humans were eating grains long before we domesticated them–there’s a reason we thought it was a good idea to domesticate grains in the first place, and it wasn’t because they were a random, un-eaten weed. If your ancestors ate bread, then there’s a good chance that you can digest bread just fine.

But if bread causes you issues, then by all means, avoid it. Different people thrive on different foods.

Please remember that this thread is speculative.

AND FOR GOODNESS SAKES DON’T PUT SUGAR IN FRUIT THINGS. JAM DOES NOT NEED SUGAR. NEITHER DOES PIE.

IF YOU ARE USING DECENT FRUIT THEN YOU DON’T NEED SUGAR. THE ONLY REASON YOU NEED SUGAR IS IF YOUR FRUIT IS CRAP. THEN JUST GO EAT SOMETHING ELSE.


Tapeworm-cancer-AIDS is a real thing

Tapeworm Spreads Deadly Cancer to Human:

A Colombian man’s lung tumors turned out to have an extremely unusual cause: The rapidly growing masses weren’t actually made of human cells, but were from a tapeworm living inside him, according to a report of the case.

This is the first known report of a person becoming sick from cancer cells that developed in a parasite, the researchers said.

“We were amazed when we found this new type of disease—tapeworms growing inside a person, essentially getting cancer, that spreads to the person, causing tumors,” said study researcher Dr. Atis Muehlenbachs, a staff pathologist at the Centers for Disease Control and Prevention’s Infectious Diseases Pathology Branch (IDPB).

The man had HIV, which weakens the immune system and likely played a role in allowing the development of the parasite cancer, the researchers said.

There’s not a lot I can add to this.

But there are probably more cases like this, if only because gay men seem to contract a lot of parasites:

Fast forward to the spring of 2017. PreP had recently ushered in the second sexual revolution and everyone was now fucking each other like it was 1979. My wonderful boyfriend and I enjoyed a healthy sex life inside and outside our open relationship. Then he started experiencing stomach problems: diarrhea, bloating, stomach aches, nausea. All too familiar with those symptoms, I recommended he go to the doctor and ask for a stool test. …

His results came back positive for giardia. …

Well, just a few months later, summer of 2017, my boyfriend started experiencing another bout of diarrhea and stomach cramps. … This time the results came back positive for entamoeba histolytica. What the fuck is entamoeba histolytica?! I knew giardia. Giardia and I were on a first name basis. But entamoeba, what now?

Entamoeba histolytica, as it turns out, is another parasite common in developing countries spread through contaminated drinking water, poor hygiene when handling food, and…rimming. The PA treating him wasn’t familiar with entamoeba histolytica or how to treat it, so she had to research (Google?) how to handle the infection. The medical literature (Google search results?) led us back to metronidazole, the same antibiotic used to treat giardia.

When your urge to lick butts is so strong that this keeps happening, you’ve got to consider an underlying condition like toxoplasmosis or kamikaze horsehair worm.

Some Migration-Related Studies

I have too many tabs open on my computer, so here are some studies/writings which all touch on migration/population movements in some way:

Biographical Memoirs of Henry Harpending [pdf]:

The late Henry Harpending of the West Hunter blog, along with Greg Cochran, wrote The 10,000 Year Explosion, did anthropological fieldwork among the Ju/’hoansi, and pioneered methods in population genetics. The biography has many interesting parts:

Henry’s early research on population genetics also helped establish the close relationship between genetics and geography. Genetic differences between groups tend to mirror the geographic distance between them, so that a map of genetic distances looks like a geographic map (Harpending and Jenkins, 1973). Henry developed methods for studying this relationship that are still in use. …

Meanwhile, Henry’s Kalahari field experience also motivated an interest in population ecology. Humans cope with variation in resource supply either by storage (averaging over time) or by mobility and sharing (averaging over space). These strategies are mutually exclusive. Those who store must defend their stored resources against others who would like to share them. Conversely, an ethic of sharing makes storage impossible. The contrast between the mobile and the sedentary Ju/’hoansi in Henry’s sample therefore represented a fundamental shift in strategy. …

Diseases need time to cause lesions on bone. If the infected individual dies quickly, no lesion will form, and the skeleton will look healthy. Lesions form only if the infected individual is healthy enough to survive for an extended period. Lesions on ancient bone may therefore imply that the population was healthy! …

In the 1970s, as Henry’s interest in genetic data waned, he began developing population genetic models of social evolution. He overturned 40 years of conventional wisdom by showing that group selection works best not when groups are isolated but when they are strongly connected by gene flow (1980, pp. 58-59; Harpending and Rogers, 1987). When gene flow is restricted, successful mutants cannot spread beyond the initial group, and group selection stalls.

Genetic Consequences of Social Stratification in Great Britain:

Human DNA varies across geographic regions, with most variation observed so far reflecting distant ancestry differences. Here, we investigate the geographic clustering of genetic variants that influence complex traits and disease risk in a sample of ~450,000 individuals from Great Britain. Out of 30 traits analyzed, 16 show significant geographic clustering at the genetic level after controlling for ancestry, likely reflecting recent migration driven by socio-economic status (SES). Alleles associated with educational attainment (EA) show most clustering, with EA-decreasing alleles clustering in lower SES areas such as coal mining areas. Individuals that leave coal mining areas carry more EA-increasing alleles on average than the rest of Great Britain. In addition, we leveraged the geographic clustering of complex trait variation to further disentangle regional differences in socio-economic and cultural outcomes through genome-wide association studies on publicly available regional measures, namely coal mining, religiousness, 1970/2015 general election outcomes, and Brexit referendum results.

Let’s hope no one reports on this as “They found the Brexit gene!”

Can you Move to Opportunity? Evidence from the Great Migration [PDF]:

The northern United States long served as a land of opportunity for black Americans, but today the region’s racial gap in intergenerational mobility rivals that of the South. I show that racial composition changes during the peak of the Great Migration (1940-1970) reduced upward mobility in northern cities in the long run, with the largest effects on black men. I identify urban black population increases during the Migration at the commuting zone level using a shift-share instrument, interacting pre-1940 black southern migrant location choices with predicted out-migration from southern counties. The Migration’s negative effects on children’s adult outcomes appear driven by neighborhood factors, not changes in the characteristics of the average child. As early as the 1960s, the Migration led to greater white enrollment in private schools, increased spending on policing, and higher crime and incarceration rates. I estimate that the overall change in childhood environment induced by the Great Migration explains 43% of the upward mobility gap between black and white men in the region today.

43% is huge and, IMO, too big. However, the author may be on to something.

Lineage Specific Histories of Mycobacterium Tuberculosis Dispersal in Africa and Eurasia:

Mycobacterium tuberculosis (M.tb) is a globally distributed, obligate pathogen of humans that can be divided into seven clearly defined lineages. … We reconstructed M.tb migration in Africa and Eurasia, and investigated lineage specific patterns of spread. Applying evolutionary rates inferred with ancient M.tb genome calibration, we link M.tb dispersal to historical phenomena that altered patterns of connectivity throughout Africa and Eurasia: trans-Indian Ocean trade in spices and other goods, the Silk Road and its predecessors, the expansion of the Roman Empire and the European Age of Exploration. We find that Eastern Africa and Southeast Asia have been critical in the dispersal of M.tb.

I spend a surprising amount of time reading about mycobacteria.

Invasive Memes

[Transmission electron micrograph: smallpox virus virions]

Do people eventually grow ideologically resistant to dangerous local memes, but remain susceptible to foreign memes, allowing them to spread like invasive species?

And if so, can we find some way to memetically vaccinate ourselves against deadly ideas?

***

Memetics is the study of how ideas (“memes”) spread and evolve, using evolutionary theory and epidemiology as models. A “viral meme” is one that spreads swiftly through society, “infecting” minds as it goes.

Of course, most memes are fairly innocent (e.g. fashion trends) or even beneficial (“wash your hands before eating to prevent disease transmission”), but some ideas, like communism, kill people.

Ideologies consist of a big set of related ideas rather than a single one, so let’s call them memeplexes.

Almost all ideological memeplexes (and religions) sound great on paper–they have to, because that’s how they spread–but they are much more variable in actual practice.

Any idea that causes its believers to suffer is unlikely to persist–at the very least, because its believers die off.

Over time, in places where people have been exposed to ideological memeplexes, their worst aspects become known and people may learn to avoid them; the memeplexes themselves can evolve to be less harmful.

Over in epidemiology, diseases humans have been exposed to for a long time become less virulent as humans become adapted to them. Chickenpox, for example, is a fairly mild disease that kills few people because the virus has been infecting people for as long as people have been around (the ancestral Varicella-Zoster virus evolved approximately 65 million years ago and has been infecting animals ever since). Rather than kill you, chickenpox prefers to enter your nerves and go dormant for decades, reemerging later as shingles, ready to infect new people.

By contrast, smallpox (Variola major and Variola minor) probably evolved from a rodent-infecting virus about 16,000 to 68,000 years ago. That’s a big range, but either way, it’s much more recent than chickenpox. Smallpox made its first major impact on the historical record around the third century BC in Egypt, and thereafter became a recurring plague in Africa and Eurasia. Note that unlike chickenpox, which is old enough to have spread throughout the world with humanity, smallpox emerged long after major population splits occurred–like part of the Asian clade splitting off and heading into the Americas.

By 1400, Europeans had developed some immunity to smallpox (due to those who didn’t have any immunity dying off), but when Columbus landed in the New World, folks here had never seen the disease before–and thus had no immunity. Diseases like smallpox and measles ripped through native communities, killing approximately 90% of the New World population.

If we extend this metaphor back to ideas–if people have been exposed to an ideology for a long time, they are more likely to have developed immunity to it or the ideology to have adapted to be relatively less harmful than it initially was. For example, the Protestant Reformation and subsequent Catholic counter-reformation triggered a series of European wars that killed 10 million people, but today Catholics and Protestants manage to live in the same countries without killing each other. New religions are much more likely to lead all of their followers in a mass suicide than old, established religions; countries that have just undergone a political revolution are much more likely to kill off large numbers of their citizens than ones that haven’t.

This is not to say that old ideas are perfect and never harmful–chickenpox still kills people and is not a fun disease–but any bad aspects are likely to become milder over time as people wise up to bad ideas (certain caveats applying).

But this process only works for ideas that have been around for a long time. What about new ideas?

You can’t stop new ideas. Technology is always changing. The world is changing, and it requires new ideas to operate. When these new ideas arrive, even terrible ones can spread like wildfire because people have no memetic antibodies to resist them. New memes, in short, are like invasive memetic species.

In the late 1960s, 15 million people still caught smallpox every year. In 1980, it was declared officially eradicated–not one case had been seen since 1977, due to a massive, world-wide vaccination campaign.

Humans can acquire immunity to disease in two main ways. The slow way is everyone who isn’t immune dying; everyone left alive happens to have adaptations that let them not die, which they can pass on to their children. As with chickenpox, over generations, the disease becomes less severe because humans become successively more adapted to it.

The fast way is to catch a disease, produce antibodies that recognize and can fight it off, and thereafter enjoy immunity. This, of course, assumes that you survive the disease.

Vaccination works by teaching the body’s immune system to recognize a disease without infecting it with the full-strength germ, using a weakened or harmless version of the germ instead. Early on, weakened germs from actual smallpox scabs or lesions were used to inoculate people–a risky method, since the germs often weren’t that weak. Later, people discovered that cowpox was similar enough to smallpox that its antibodies could also fight smallpox, but cowpox itself was too adapted to cattle hosts to seriously harm humans. (Today I believe the vaccine uses a different weakened virus, but the principle is the same.)

The good part about memes is that you do not actually have to inject a physical substance into your body in order to learn about them.

Ideologies are very difficult to evaluate in the abstract, because, as mentioned, they are all optimized to sound good on paper. It’s their actual effects we are interested in.

So if we want to learn whether an idea is good or not, it’s probably best not to learn about it by merely reading books written by its advocates. Talk to people in places where the ideas have already been tried and learn from their experiences. If those people tell you this ideology causes mass suffering and they hate it, drop it like a hot potato. If those people are practicing an “impure” version of the ideology, it’s probably an improvement over the original.

For example, “communism” as practiced in China today is quite different from “communism” as practiced there 50 years ago–so much so that the modern system really isn’t communism at all. There was never, to my knowledge, an official changeover from one system to another, just a gradual accretion of improvements. This speaks strongly against communism as an ideology, since no country has managed to be successful by moving toward ideological communist purity, only by moving away from it–though they may still find it useful to retain some of communism’s original ideas.

I think there is a similar dynamic occurring in many Islamic countries. Islam is a relatively old religion that has had time to adapt to local conditions in many different parts of the world. For example, in Morocco, where the climate is more favorable to raising pigs than in other parts of the Islamic world, the taboo against pigs isn’t as strongly observed. The burka is not an Islamic universal, but characteristic of central Asia (the similar niqab is from Yemen). Islamic head coverings vary by culture–such as this kurhars, traditionally worn by unmarried women in Ingushetia, north of the Caucasus, or this cap, popular in Xinjiang. Turkey has laws officially restricting burkas in some areas, and Syria discourages even hijabs. Women in Iran did not go heavily veiled prior to the Iranian Revolution. So the insistence on extensive veiling in many Islamic communities (like the territory conquered by ISIS) is not a continuation of old traditions, but the imposition of a new, idealized, version of Islam.

Purity is counter to practicality.

Of course, this approach is hampered by the fact that what works in one place, time, and community may not work in a different one. Tilling your fields one way works in Europe, and tilling them a different way works in Papua New Guinea. But extrapolating from what works is at least a good start.


Did tobacco become popular because it kills parasites?

While reading about the conditions in a Burmese prison around the turn of the previous century (The History and Romance of Crime: Oriental Prisons, by Arthur Griffiths) (not good), it occurred to me that the large amounts of tobacco smoke inside the prison might have had some beneficial effect. Sure, in the long run, tobacco is highly likely to give you cancer, but in the short run, is it noxious to fleas and other disease-bearing pests?

Meanwhile in Melanesia (Pygmies and Papuans), a group of ornithologists struggled up a river to reach an almost completely isolated tribe of Melanesians who barely practiced horticulture; even further up the mountain they met a band of pygmies (negritoes) whose existence had only been rumored. The pygmies cultivated tobacco, which they traded with their neighbors–who were otherwise not terribly interested in trading for worldly goods.

The homeless smoke at rates 3x higher than the rest of the population, though this might have something to do with the high correlation between schizophrenia and smoking–80% of schizophrenics smoke, compared to 20% of the general population. Obviously this correlation is best explained by tobacco’s well-noted psychological effects (including addiction), but why is tobacco so ubiquitous in prisons that cigarettes are used as currency? Could cigarettes have, in unsanitary conditions, some healthful purpose?

From NPR: Pot For Parasites? Pygmy Men Smoke out Worms:

On average, the more THC byproduct that Hagen’s team found in an Aka man’s urine, the fewer worm eggs were present in his gut.

“The heaviest smokers, with everything else being equal, had about half the number of parasitic eggs in their stool, compared to everyone else,” Hagen says. …

THC — and nicotine — are known to kill intestinal worms in a Petri dish. And many worms make their way to the gut via the lungs. “The worms’ larval stage is in the lung,” Hagen says. “When you smoke you just blast them with THC or nicotine directly.”

Smithsonian reports that Birds Harness the Deadly Power of Nicotine to Poison Parasites:

Smoking kills. But if you’re a bird and if you want to kill parasites, that can be a good thing. City birds have taken to stuffing their nests with cigarette butts to poison potential parasites. Nature reports:

“In a study published today in Biology Letters, the researchers examined the nests of two bird species common on the North American continent. They measured the amount of cellulose acetate (a component of cigarette butts) in the nests, and found that the more there was, the fewer parasitic mites the nest contained.”

Out in the State of Nature, parasites are extremely common and difficult to get rid of (e.g., hookworm elimination campaigns in the early 1900s found that 40% of school-aged children were infected); farmers can apparently use tobacco as a natural de-wormer (but be careful, as tobacco can be poisonous).

In the pre-modern environment, when many people had neither shoes, toilets, nor purified water, parasites were very hard to avoid.
Befoundalive recommends eating the tobacco from a cigarette if you have intestinal parasites and no access to modern medicine.

Here’s a study comparing parasite rates in tobacco workers vs. prisoners in Ethiopia:

Overall, 8 intestinal parasite species have been recovered singly or in combinations from 146 (61.8 %) samples. The prevalence in prison population (88/121 = 72.7%) was significantly higher than that in tobacco farm (58/115 = 50.4%).
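
A quick significance check on those two prevalence figures–a sketch, using only the counts given in the quoted abstract–confirms the gap is unlikely to be chance:

```python
# Counts taken from the quoted abstract: infected vs. not infected.
from scipy.stats import chi2_contingency

table = [[88, 121 - 88],   # prison population
         [58, 115 - 58]]   # tobacco-farm workers
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.4f}")   # p comes out well below 0.05
```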

In vitro anthelmintic effect of Tobacco (Nicotiana tabacum) extract on parasitic nematode, Marshallagia marshalli reports:

Because of developing resistance to the existing anthelmintic drugs, there is a need for new anthelmintic agents. Tobacco plant has alkaloid materials that have antiparasitic effect. We investigated the in vitro anthelminthic effect of aqueous and alcoholic extract of Tobacco (Nicotiana tabacum) against M. marshalli. … Overall, extracts of Tobacco possess considerable anthelminthic activity and more potent effects were observed with the highest concentrations. Therefore, the in vivo study on Tobacco in animal models is recommended.

(Helminths are parasitic worms; an anthelmintic is an anti-parasite agent.)

So it looks like, at least in the pre-sewers and toilets and clean water environment when people struggled to stay parasite free, tobacco (and certain other drugs) may have offered people an edge over the pests. (I’ve noticed many bitter or noxious plants seem to have been useful for occasionally flushing out parasites, but you certainly don’t want to be in a state of “flush” all the time.)

It looks like it was only once regular sanitation became good enough that we didn’t have to worry about parasites anymore that people started getting really concerned about tobacco’s long-term negative effects on humans.

Is Crohn’s Disease Tuberculosis of the Intestines?

Source: Rise in Crohn’s Disease admission rates, Glasgow

Crohn’s is an inflammatory disease of the digestive tract involving diarrhea, vomiting, internal lesions, pain, and severe weight loss. Left untreated, Crohn’s can lead to death through direct starvation/malnutrition, infections caused by the intestinal walls breaking down and spilling feces into the rest of the body, or a whole host of other horrible symptoms, like pyoderma gangrenosum–basically your skin just rotting off.

Crohn’s disease has no known cause and no cure, though several treatments have proven effective at putting it into remission–at least temporarily.

The disease appears to be triggered by a combination of environmental, bacterial, and genetic factors–about 70 genes have been identified so far that appear to contribute to an individual’s chance of developing Crohn’s, but no gene has been found yet that definitely triggers it. (The siblings of people who have Crohn’s are more likely than non-siblings to also have it, and identical twins of Crohn’s patients have a 55% chance of developing it.) A variety of environmental factors, such as living in a first world country (parasites may be somewhat protective against the disease), smoking, or eating lots of animal protein, also correlate with Crohn’s, but since only about 3.2 in 1,000 people even in the West have it, these factors obviously don’t trigger the disease in most people.

Crohn’s appears to be a kind of over-reaction of the immune system, though not specifically an auto-immune disorder, which suggests that a pathogen of some sort is probably involved. Most people are probably able to fight off this pathogen, but people with a variety of genetic issues may have more trouble–according to Wikipedia, “There is considerable overlap between susceptibility loci for IBD and mycobacterial infections.[62]” Mycobacterium is a genus of bacteria that includes species like tuberculosis and leprosy. A variety of bacteria–including specific strains of E. coli, Yersinia, Listeria, and Mycobacterium avium subspecies paratuberculosis–are found in the intestines of Crohn’s sufferers at higher rates than in the intestines of non-sufferers (intestines, of course, are full of all kinds of bacteria).

Source: The Gutsy Group

Crohn’s treatment depends on the severity of the case and specific symptoms, but often includes a course of antibiotics (especially if the patient has abscesses), tube feeding (in acute cases where the sufferer is having trouble digesting food), and long-term immune-system suppressants such as prednisone, methotrexate, or infliximab. In severe cases, damaged portions of the intestines may be cut out. Before the development of immunosuppressant treatments, sufferers often progressively lost more and more of their intestines, with predictably unpleasant results, like no longer having a functioning colon. (70% of Crohn’s sufferers eventually have surgery.)

A similar disease, Johne’s, infects cattle. Johne’s is caused by Mycobacterium avium subspecies paratuberculosis, (hereafter just MAP). MAP typically infects calves at birth, transmitted via infected feces from their mothers, incubates for two years, and then manifests as diarrhea, malnutrition, dehydration, wasting, starvation, and death. Luckily for cows, there’s a vaccine, though any infectious disease in a herd is a problem for farmers.

If you’re thinking that “paratuberculosis” sounds like “tuberculosis,” you’re correct. When scientists first isolated it, they thought the bacteria looked rather like tuberculosis, hence the name, “tuberculosis-like.” The scientists’ instincts were correct, and it turns out that MAP is in the same bacterial genus as tuberculosis and leprosy (though it may be more closely related to leprosy than TB). (“Genus” is one step up from “species”: our species is Homo sapiens; our genus, Homo, we share with Homo neanderthalensis, Homo erectus, etc., but chimps and gorillas are not in the genus Homo.)

Figure A: Crohn’s disease in humans. Figure B: Johne’s disease in animals. (Greenstein, Lancet Infectious Diseases, 2004; H/T Human Para Foundation)

The intestines of cattle who have died of MAP look remarkably like the intestines of people suffering from advanced Crohn’s disease.

MAP can actually infect all sorts of mammals, not just cows; it’s just more common and problematic in cattle herds. (Sorry, we’re not getting through this post without photos of infected intestines.)

So here’s how it could work:

The MAP bacteria–possibly transmitted via milk or meat products–is fairly common and infects a variety of mammals. Most people who encounter it fight it off with no difficulty (or perhaps have a short bout of diarrhea and then recover).

A few people, though, have genetic issues that make it harder for them to fight off the infection. For example, Crohn’s sufferers produce less intestinal mucus, which normally acts as a barrier between the intestines and all of the stuff in them.

Interestingly, parasite infections can increase intestinal mucus (some parasites feed on mucus), which in turn is protective against other forms of infection; decreasing parasite load can increase the chance of other intestinal infections.

Once MAP enters the intestinal walls, the immune system attempts to fight it off, but a genetic defect in autophagy results in the immune cells themselves getting infected. The body responds to the signs of infection by sending more immune cells to fight it, which subsequently also get infected with MAP, triggering the body to send even more immune cells. These lumps of infected cells become the characteristic ulcerations and lesions that mark Crohn’s disease and eventually leave the intestines riddled with inflamed tissue and holes.

The most effective treatments for Crohn’s, like infliximab, don’t target the infection but the immune system. They work by interrupting the immune system’s feedback cycle so that it stops sending more cells to the infected area, giving the already-infected cells a chance to die. It doesn’t cure the disease, but it does give the intestines time to recover.
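
To make that feedback loop concrete, here is a toy simulation–illustrative numbers only, not a biological model–of how the infected-cell population behaves when each infected cell recruits new, infectable immune cells, versus when recruitment is interrupted (as an infliximab-like blockade would do):

```python
# Toy numbers only: 'recruitment' is new infectable cells drawn in per infected
# cell per step; 'death_rate' is the fraction of infected cells dying per step.
def infected_cells(recruitment, death_rate=0.2, steps=20, start=100.0):
    infected = start
    for _ in range(steps):
        infected = infected * (1 - death_rate) + recruitment * infected
    return infected

print(infected_cells(recruitment=0.5))  # feedback intact: infection keeps growing
print(infected_cells(recruitment=0.0))  # feedback blocked: infected cells die off
```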

Unfortunately, this means infliximab raises your chance of developing TB:

There were 70 reported cases of tuberculosis after treatment with infliximab for a median of 12 weeks. In 48 patients, tuberculosis developed after three or fewer infusions. … Of the 70 reports, 64 were from countries with a low incidence of tuberculosis. The reported frequency of tuberculosis in association with infliximab therapy was much higher than the reported frequency of other opportunistic infections associated with this drug. In addition, the rate of reported cases of tuberculosis among patients treated with infliximab was higher than the available background rates.

because it is actively suppressing the immune system’s ability to fight diseases in the TB family.

Luckily, if you live in the first world and aren’t in prison, you’re unlikely to catch TB–only about 5-10% of the US population tests positive for TB, compared to 80% in many African and Asian countries. (In other words, increased immigration from these countries will absolutely put Crohn’s sufferers at risk of dying.)

There are a fair number of similarities between Crohn’s, TB, and leprosy–most notably, they are all very slow diseases that can take years to finally kill you. By contrast, other deadly diseases, like smallpox, cholera, and Yersinia pestis (plague), spread and kill extremely quickly. Within about two weeks, you’ll definitely know whether your plague infection is going to kill you or not, whereas you can have leprosy for 20 years before you even notice it.

TB, like Crohn’s, creates granulomas:

Tuberculosis is classified as one of the granulomatous inflammatory diseases. Macrophages, T lymphocytes, B lymphocytes, and fibroblasts aggregate to form granulomas, with lymphocytes surrounding the infected macrophages. When other macrophages attack the infected macrophage, they fuse together to form a giant multinucleated cell in the alveolar lumen. The granuloma may prevent dissemination of the mycobacteria and provide a local environment for interaction of cells of the immune system.[63] However, more recent evidence suggests that the bacteria use the granulomas to avoid destruction by the host’s immune system. … In many people, the infection waxes and wanes.

Crohn’s also waxes and wanes. Many sufferers experience flare-ups of the disease, during which they may have to be hospitalized, tube fed, and put through another round of antibiotics or resection (surgical removal of part of the intestines) before they improve–until the disease flares up again.

Leprosy is also marked by lesions, though of course so are dozens of other diseases.

Note: Since Crohn’s is a complex, multi-factorial disease, there may be more than one bacteria or pathogen that could infect people and create similar results. Alternatively, Crohn’s sufferers may simply have intestines that are really bad at fighting off all sorts of diseases, as a side effect of Crohn’s, not a cause, resulting in a variety of unpleasant infections.

The MAP hypothesis suggests several possible treatment routes:

  1. Improving the intestinal mucus, perhaps via parasites or medicines derived from parasites
  2. Improving the intestinal microbe balance
  3. Antibiotics that treat MAP
  4. An anti-MAP vaccine similar to the one for Johne’s disease in cattle
  5. Eliminating MAP from the food supply

Here’s an article about parasites and Crohn’s:

To determine how the worms could be our frenemies, Cadwell and colleagues tested mice with the same genetic defect found in many people with Crohn’s disease. Mucus-secreting cells in the intestines malfunction in the animals, reducing the amount of mucus that protects the gut lining from harmful bacteria. Researchers have also detected a change in the rodents’ microbiome, the natural microbial community in their guts. The abundance of one microbe, an inflammation-inducing bacterium in the Bacteroides group, soars in the mice with the genetic defect.

The researchers found that feeding the rodents one type of intestinal worm restored their mucus-producing cells to normal. At the same time, levels of two inflammation indicators declined in the animals’ intestines. In addition, the bacterial lineup in the rodents’ guts shifted, the team reports online today in Science. Bacteroides’s numbers plunged, whereas the prevalence of species in a different microbial group, the Clostridiales, increased. A second species of worm also triggers similar changes in the mice’s intestines, the team confirmed.

To check whether helminths cause the same effects in people, the scientists compared two populations in Malaysia: urbanites living in Kuala Lumpur, who harbor few intestinal parasites, and members of an indigenous group, the Orang Asli, who live in a rural area where the worms are rife. A type of Bacteroides, the proinflammatory microbes, predominated in the residents of Kuala Lumpur. It was rarer among the Orang Asli, where a member of the Clostridiales group was plentiful. Treating the Orang Asli with drugs to kill their intestinal worms reversed this pattern, favoring Bacteroides species over Clostridiales species, the team documented.

This sounds unethical, unless they were merely tagging along with another team of doctors who were de-worming the Orang Asli for normal health reasons and didn’t set out to potentially inflict Crohn’s on people. Nevertheless, it’s an interesting study.

At any rate, so far they haven’t managed to produce an effective medicine from parasites, possibly in part because people think parasites are icky.

But if parasites aren’t disgusting enough for you, there’s always the option of directly changing the gut bacteria: fecal microbiota transplants (FMT). A fecal transplant is exactly what it sounds like: you take the regular feces out of the patient and put in new, fresh feces from an uninfected donor. (When your other option is pooping into a bag for the rest of your life because your colon was removed, swallowing a few poop pills doesn’t sound so bad.) E.g., Fecal microbiota transplant for refractory Crohn’s:

Approximately one-third of patients with Crohn’s disease do not respond to conventional treatments, and some experience significant adverse effects, such as serious infections and lymphoma, and many patients require surgery due to complications. … Herein, we present a patient with Crohn’s colitis in whom biologic therapy failed previously, but clinical remission and endoscopic improvement was achieved after a single fecal microbiota transplantation infusion.

Here’s a Chinese doctor who appears to have good success using FMTs to treat Crohn’s–improvement in 87% of patients one month after treatment and remission in 77%–though the effects may wear off over time. Note: even infliximab, considered a “wonder drug” for its amazing abilities, only works for about 50-75% of patients, must be administered via regular IV infusions for life (or until it stops working), costs about $20,000 a year per patient, and has some serious side effects, like cancer. If fecal transplants can get the same results, that’s pretty good.

Little known fact: “In the United States, the Food and Drug Administration (FDA) has regulated human feces as an experimental drug since 2013.”

Antibiotics are another potential route. RedHill Biopharma is conducting a phase III clinical study of antibiotics designed to fight MAP in Crohn’s patients, and is expected to release some of its results in April.

A Crohn’s MAP vaccine trial is underway in healthy volunteers:

Mechanism of action: The vaccine is what is called a ‘T-cell’ vaccine. T-cells are a type of white blood cell -an important player in the immune system- in particular, for fighting against organisms that hide INSIDE the body’s cells –like MAP does. Many people are exposed to MAP but most don’t get Crohn’s –Why? Because their T-cells can ‘see’ and destroy MAP. In those who do get Crohn’s, the immune system has a ‘blind spot’ –their T-cells cannot see MAP. The vaccine works by UN-BLINDING the immune system to MAP, reversing the immune dysregulation and programming the body’s own T-cells to seek out and destroy cells containing MAP. For general information, there are two informative videos about T Cells and the immune system below.

Efficacy: In extensive tests in animals (in mice and in cattle), 2 shots of the vaccine spaced 8 weeks apart proved to be a powerful, long-lasting stimulant of immunity against MAP. To read the published data from the trial in mice, click here. To read the published data from the trial in cattle, click here.

Before: fistula in the intestines of a 31-year-old Crohn’s patient. From Dr. Borody, Combining infliximab, anti-MAP and hyperbaric oxygen therapy for resistant fistulizing Crohn’s disease

Dr. Borody (who was influential in the discovery that ulcers are caused by the bacterium H. pylori, not stress) has had amazing success treating Crohn’s patients with a combination of infliximab, anti-MAP antibiotics, and hyperbaric oxygen. Here are two of his before and after photos of the intestines of a 31-year-old Crohn’s sufferer:

Here are some more interesting articles on the subject:

Sources: Is Crohn’s Disease caused by a Mycobacterium? Comparisons with Tuberculosis, Leprosy, and Johne’s Disease.

What is MAP?

Researcher Finds Possible Link Between Cattle and Human Diseases:

Last week, Davis and colleagues in the U.S. and India published a case report in Frontiers in Medicine (http://journal.frontiersin.org/article/10.3389/fmed.2016.00049/full). The report described a single patient, clearly infected with MAP, with the classic features of Johne’s disease in cattle, including the massive shedding of MAP in his feces. The patient was also ill with clinical features that were indistinguishable from the clinical features of Crohn’s. In this case though, a novel treatment approach cleared the patient’s infection.

The patient was treated with antibiotics known to be effective for tuberculosis, which then eliminated the clinical symptoms of Crohn’s disease, too.

After: The same intestines, now healed

Psychology Today: Treating Crohn’s Disease:

Through luck, hard work, good fortune, perseverance, and wonderful doctors, I seem to be one of the few people in the world who can claim to be “cured” of Crohn’s Disease. … In brief, I was treated for 6 years with medications normally used for multidrug resistant TB and leprosy, under the theory that a particular germ causes Crohn’s Disease. I got well, and have been entirely well since 2004. I do not follow a particular diet, and my recent colonoscopies and blood work have shown that I have no inflammation. The rest of these 3 blogs will explain more of the story.

What about removing Johne’s disease from the food supply? Assuming Johne’s is the culprit, this may be hard to do, (it’s pretty contagious in cattle, can lie dormant for years, and survives cooking) but drinking ultrapasteurized milk may be protective, especially for people who are susceptible to the disease.

***

However… there are also studies that contradict the MAP theory. For example, a recent study of the rate of Crohn’s disease in people exposed to Johne’s disease found no correlation. (However, Crohn’s is a pretty rare condition, and the survey only found 7 total cases–small enough that random chance could be a factor–but we are talking about people who probably got very up close and personal with feces infected with MAP.)

Another study found a negative correlation between Crohn’s and milk consumption:

Logistic regression showed no significant association with measures of potential contamination of water sources with MAP, water intake, or water treatment. Multivariate analysis showed that consumption of pasteurized milk (per kg/month: odds ratio (OR) = 0.82, 95% confidence interval (CI): 0.69, 0.97) was associated with a reduced risk of Crohn’s disease. Meat intake (per kg/month: OR = 1.40, 95% CI: 1.17, 1.67) was associated with a significantly increased risk of Crohn’s disease, whereas fruit consumption (per kg/month: OR = 0.78, 95% CI: 0.67, 0.92) was associated with reduced risk.
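For readers who don’t work with odds ratios every day: those figures are per kilogram per month of consumption, so they compound multiplicatively with intake. A quick sketch of what the milk figure implies (using only the point estimate of 0.82, and ignoring the confidence interval):

```python
# The study's odds ratios are "per kg/month," so they compound
# multiplicatively with intake. Pasteurized-milk point estimate: OR = 0.82.
OR_PER_KG = 0.82

for kg_per_month in (1, 3, 5):
    print(f"{kg_per_month} kg/month -> {OR_PER_KG ** kg_per_month:.2f}x the odds of Crohn's")
# 1 kg/month -> 0.82x, 3 kg/month -> 0.55x, 5 kg/month -> 0.37x
```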

So even if Crohn’s is caused by MAP or something similar, it appears that people aren’t catching it from milk.

There are other theories about what causes Crohn’s–these folks, for example, think it’s related to consumption of GMO corn. Perhaps MAP has only been found in the intestines of Crohn’s patients because people with Crohn’s are really bad at fighting off infections. Perhaps the whole thing is caused by weird gut bacteria, too few parasites, insufficient vitamin D, or industrial pollution.

The condition remains very much a mystery.

2 Interesting studies: Early Humans in SE Asia and Genetics, Relationships, and Mental Illness

Ancient Teeth Push Back Early Arrival of Humans in Southeast Asia:

New tests on two ancient teeth found in a cave in Indonesia more than 120 years ago have established that early modern humans arrived in Southeast Asia at least 20,000 years earlier than scientists previously thought, according to a new study. …

The findings push back the date of the earliest known modern human presence in tropical Southeast Asia to between 63,000 and 73,000 years ago. The new study also suggests that early modern humans could have made the crossing to Australia much earlier than the commonly accepted time frame of 60,000 to 65,000 years ago.

I would like to emphasize that nothing based on a couple of teeth is conclusive, “settled,” or “proven” science. Samples can get contaminated, machines make errors, people play tricks–in the end, we’re looking for the weight of the evidence.

I am personally of the opinion that there were (at least) two ancient human migrations into south east Asia, but only time will tell if I am correct.

Genome-wide association study of social relationship satisfaction: significant loci and correlations with psychiatric conditions, by Varun Warrier, Thomas Bourgeron, Simon Baron-Cohen:

We investigated the genetic architecture of family relationship satisfaction and friendship satisfaction in the UK Biobank. …

In the DSM-5, difficulties in social functioning is one of the criteria for diagnosing conditions such as autism, anorexia nervosa, schizophrenia, and bipolar disorder. However, little is known about the genetic architecture of social relationship satisfaction, and if social relationship dissatisfaction genetically contributes to risk for psychiatric conditions. …

We present the results of a large-scale genome-wide association study of social relationship satisfaction in the UK Biobank measured using family relationship satisfaction and friendship satisfaction. Despite the modest phenotypic correlations, there was a significant and high genetic correlation between the two phenotypes, suggesting a similar genetic architecture between the two phenotypes.

Note: the two “phenotypes” here are “family relationship satisfaction” and “friendship satisfaction.”

We first investigated if the two phenotypes were genetically correlated with psychiatric conditions. As predicted, most if not all psychiatric conditions had a significant negative correlation for the two phenotypes. … We observed significant negative genetic correlation between the two phenotypes and a large cross-condition psychiatric GWAS. This underscores the importance of social relationship dissatisfaction in psychiatric conditions. …

In other words, people with mental illnesses generally don’t have a lot of friends nor get along with their families.

One notable exception is the negative genetic correlation between measures of cognition and the two phenotypes. Whilst subjective wellbeing is positively genetically correlated with measures of cognition, we identify a small but statistically significant negative correlation between measures of cognition and the two phenotypes.

Are they saying that smart people have fewer friends? Or that dumber people are happier with their friends and families? I think they are clouding this finding in intentionally obtuse language.

A recent study highlighted that people with very high IQ scores tend to report lower satisfaction with life when they socialize more frequently.

Oh, I think I read that one. It’s not the socialization per se that’s the problem, but spending time away from the smart person’s intellectual activities. For example, I enjoy discussing the latest genetics findings with friends, but I don’t enjoy going on family vacations because they are a lot of work that does not involve genetics. (This is actually something my relatives complain about.)

…alleles that increase the risk for schizophrenia are in the same haplotype as alleles that decrease friendship satisfaction. The functional consequences of this locus must be formally tested. …

Loss of function mutations in these genes lead to severe biochemical consequences, and are implicated in several neuropsychiatric conditions. For example, de novo loss of function mutations in pLI intolerant genes confers significant risk for autism. Our results suggest that pLI > 0.9 genes contribute to psychiatric risk through both common and rare genetic variation.

Evolution is slow–until it’s fast: Genetic Load and the Future of Humanity

Source: Priceonomics

A species may live in relative equilibrium with its environment, hardly changing from generation to generation, for millions of years. Turtles, for example, have barely changed since the Cretaceous, when dinosaurs still roamed the Earth.

But if the environment changes–critically, if selective pressures change–then the species will change, too. This was most famously demonstrated with England’s peppered moths, which changed color from white-and-black speckled to pure black when pollution darkened the trunks of the trees they lived on. To survive, these moths need to avoid being eaten by birds, so any moth that stands out against the tree trunks tends to get turned into an avian snack. Against light-colored trees, dark-colored moths stood out and were eaten; against dark-colored trees, light-colored moths stood out.

This change did not require millions of years. Dark-colored moths were virtually unknown in 1810, but by 1895, 98% of the moths were black.

The time it takes for evolution to occur depends on two things: (A) the frequency of the trait in the population, and (B) how strongly you are selecting for (or against) it.
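To put rough numbers on that claim, here is a minimal one-locus selection sketch. (The model is a standard haploid selection update; the starting frequencies and selection coefficients are invented for illustration, not estimated from the actual moth data.)

```python
# Minimal haploid selection model: p is the frequency of the favored type,
# s is its relative fitness advantage per generation.
def generations_to_reach(p0, target, s):
    """Count generations until the favored type reaches `target` frequency."""
    p, gens = p0, 0
    while p < target:
        p = p * (1 + s) / (p * (1 + s) + (1 - p))  # selection update
        gens += 1
    return gens

# A moderately common trait under strong selection spreads very fast...
print(generations_to_reach(0.02, 0.98, 0.5))     # 20 generations
# ...while a rare trait under weak selection takes over ten times as long.
print(generations_to_reach(0.0001, 0.98, 0.05))  # 269 generations
```

At one generation per year, twenty generations fits comfortably inside the moths’ 85-year window; a rare trait under weak selection would still be creeping along centuries later.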

Let’s break this down a little bit. Within a species, there exists a great deal of genetic variation. Some of this variation happens because two parents with different genes get together and produce offspring with a combination of their genes. Some of this variation happens because of random errors–mutations–that occur during copying of the genetic code. Much of the “natural variation” we see today started as some kind of error that proved to be useful, or at least not harmful. For example, all humans originally had dark skin similar to modern Africans’, but random mutations in some of the folks who no longer lived in Africa gave them lighter skin, eventually producing “white” and “Asian” skin tones.

(These random mutations also happen in Africa, but there they are harmful and so don’t stick around.)

Natural selection can only act on the traits that are actually present in the population. If we tried to select for “ability to shoot x-ray lasers from our eyes,” we wouldn’t get very far, because no one actually has that mutation. By contrast, albinism is rare, but it definitely exists, and if for some reason we wanted to select for it, we certainly could. (The incidence of albinism among the Hopi Indians is high enough–1 in 200 Hopis vs. 1 in 20,000 Europeans generally and 1 in 30,000 Southern Europeans–for scientists to discuss whether the Hopi have been actively selecting for albinism. This still isn’t a lot of albinism, but since the American Southwest is not a good environment for pale skin, it’s something.)

You will have a much easier time selecting for traits that crop up more frequently in your population than traits that crop up rarely (or never).

Second, we have intensity–and variety–of selective pressure. What % of your population is getting removed by natural selection each year? If 50% of your moths get eaten by birds because they’re too light, you’ll get a much faster change than if only 10% of moths get eaten.

Selection doesn’t have to involve getting eaten, though. Perhaps some of your moths are moth Lotharios, seducing all of the moth ladies with their fuzzy antennae. Over time, the moth population will develop fuzzier antennae as these handsome males out-reproduce their less hirsute cousins.

No matter what kind of selection you have, nor what part of your curve it’s working on, all that ultimately matters is how many offspring each individual has. If white moths have more children than black moths, then you end up with more white moths. If black moths have more babies, then you get more black moths.

Source: SUPS.org

So what happens when you completely remove selective pressures from a population?

Back in 1968, ethologist John B. Calhoun set up an experiment popularly called “Mouse Utopia.” Four pairs of mice were given a large, comfortable habitat with no predators and plenty of food and water.

Predictably, the mouse population increased rapidly–once the mice were established in their new homes, their population doubled every 55 days. But after 211 days of explosive growth, reproduction began–mysteriously–to slow. For the next 245 days, the mouse population doubled only once every 145 days.

The birth rate continued to decline. As births and deaths reached parity, the mouse population stopped growing. Finally the last breeding female died, and the whole colony went extinct.

source

As I’ve mentioned before, Israel is (AFAIK) the only developed country in the world with a TFR above replacement.

It has long been known that overcrowding leads to population stress and reduced reproduction, but overcrowding can only explain why the mouse population began to shrink–not why it died out. Surely by the time there were only a few breeding pairs left, things had become comfortable enough for the remaining mice to resume reproducing. Why did the population not stabilize at some comfortable level?

Professor Bruce Charlton suggests an alternative explanation: the removal of selective pressures on the mouse population resulted in increasing mutational load, until the entire population became too mutated to reproduce.

What is genetic load?

As I mentioned before, every time a cell replicates, a certain number of errors–mutations–occur. Occasionally these mutations are useful, but the vast majority of them are not. About 30-50% of pregnancies end in miscarriage (the percent of miscarriages people recognize is lower because embryos often miscarry before causing any overt signs of pregnancy,) and the majority of those miscarriages are caused by genetic errors.

Unfortunately, randomly changing part of your genetic code is more likely to give you no skin than skintanium armor.

But only the worst genetic problems never see the light of day. Plenty of mutations merely reduce fitness without actually killing you. Down Syndrome, famously, is caused by an extra copy of chromosome 21.

While a few traits–such as sex or eye color–can be simply modeled as influenced by only one or two genes, many traits–such as height or IQ–appear to be influenced by hundreds or thousands of genes:

Differences in human height is 60–80% heritable, according to several twin studies[19] and has been considered polygenic since the Mendelian-biometrician debate a hundred years ago. A genome-wide association (GWA) study of more than 180,000 individuals has identified hundreds of genetic variants in at least 180 loci associated with adult human height.[20] The number of individuals has since been expanded to 253,288 individuals and the number of genetic variants identified is 697 in 423 genetic loci.[21]

Obviously each of these genes plays only a small role in determining overall height (and this is of course holding environmental factors constant.) There are a few extreme conditions–gigantism and dwarfism–that are caused by single mutations, but the vast majority of height variation is caused by which particular mix of those 700 or so variants you happen to have.
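Conceptually, a polygenic trait is just a big sum: each variant nudges the trait up or down a little, and your genetic height is the total of your particular nudges. A toy sketch (the effect sizes and genotypes below are randomly invented placeholders, not the actual GWAS estimates):

```python
import random

random.seed(42)
N_VARIANTS = 700  # roughly the number of height-associated variants found so far

# Invented per-allele effects in millimeters; real effects are estimated from data.
effects = [random.gauss(0, 2) for _ in range(N_VARIANTS)]

def genetic_height_offset(genotype):
    """genotype[i] is 0, 1, or 2 copies of variant i's trait-raising allele."""
    return sum(copies * effect for copies, effect in zip(genotype, effects))

person = [random.choice([0, 1, 2]) for _ in range(N_VARIANTS)]
print(f"{genetic_height_offset(person):+.1f} mm relative to the population average")
```

No single term dominates the sum, which is exactly why hunting for “the height gene” (or “the IQ gene”) never got anywhere.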

The situation with IQ is similar:

Intelligence in the normal range is a polygenic trait, meaning it’s influenced by more than one gene.[3][4]

The general figure for the heritability of IQ, according to an authoritative American Psychological Association report, is 0.45 for children, and rises to around 0.75 for late teens and adults.[5][6] In simpler terms, IQ goes from being weakly correlated with genetics, for children, to being strongly correlated with genetics for late teens and adults. … Recent studies suggest that family and parenting characteristics are not significant contributors to variation in IQ scores;[8] however, poor prenatal environment, malnutrition and disease can have deleterious effects.[9][10]

And from a recent article published in Nature Genetics, Genome-wide association meta-analysis of 78,308 individuals identifies new loci and genes influencing human intelligence:

Despite intelligence having substantial heritability (0.54) and a confirmed polygenic nature, initial genetic studies were mostly underpowered. Here we report a meta-analysis for intelligence of 78,308 individuals. We identify 336 associated SNPs (METAL P < 5 × 10−8) in 18 genomic loci, of which 15 are new. Around half of the SNPs are located inside a gene, implicating 22 genes, of which 11 are new findings. Gene-based analyses identified an additional 30 genes (MAGMA P < 2.73 × 10−6), of which all but one had not been implicated previously. We show that the identified genes are predominantly expressed in brain tissue, and pathway analysis indicates the involvement of genes regulating cell development (MAGMA competitive P = 3.5 × 10−6). Despite the well-known difference in twin-based heritability for intelligence in childhood (0.45) and adulthood (0.80), we show substantial genetic correlation (rg = 0.89, LD score regression P = 5.4 × 10−29). These findings provide new insight into the genetic architecture of intelligence.

The more genes influence a trait, the harder those genes are to identify without extremely large studies, because any small group of people might not even share the same set of relevant variants.
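There’s a standard back-of-the-envelope behind that: a variant explaining a fraction r² of trait variance produces a test statistic of roughly r√N in a sample of N people, and it has to clear the genome-wide significance bar of p < 5 × 10−8. A rough power calculation (a textbook approximation, not the method the paper itself used):

```python
from statistics import NormalDist

# Approximate sample size needed to detect a variant explaining a fraction
# r2 of trait variance at genome-wide significance (alpha = 5e-8, two-sided)
# with 80% power, via the approximation N ~ (z_alpha + z_power)^2 / r2.
z_alpha = NormalDist().inv_cdf(1 - 5e-8 / 2)  # about 5.45
z_power = NormalDist().inv_cdf(0.80)          # about 0.84

for r2 in (0.01, 0.001, 0.0001):
    n = (z_alpha + z_power) ** 2 / r2
    print(f"variant explaining {r2:.2%} of variance: ~{n:,.0f} people needed")
# 1.00%: ~4,000 people; 0.10%: ~40,000; 0.01%: ~400,000
```

A variant explaining a hundredth of a percent of the variance needs samples in the hundreds of thousands, which is why hits for traits like intelligence only started appearing once biobank-scale datasets existed.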

High IQ correlates positively with a number of life outcomes, like health and longevity, while low IQ correlates with negative outcomes like disease, mental illness, and early death. Obviously this is in part because dumb people are more likely to make dumb choices which lead to death or disease, but IQ also correlates with choice-free matters like height and your ability to quickly press a button. Our brains are not some mysterious entities floating in a void, but physical parts of our bodies, and anything that affects our overall health and physical functioning is likely to also have an effect on our brains.

Like height, most of the genetic variation in IQ is the combined result of many genes. We’ve definitely found some mutations that result in abnormally low IQ, but so far we have yet (AFAIK) to find any genes that produce an IQ equivalent of gigantism. In other words, low (genetic) IQ is caused by genetic load–Small Yet Important Genetic Differences Between Highly Intelligent People and General Population:

The study focused, for the first time, on rare, functional SNPs – rare because previous research had only considered common SNPs and functional because these are SNPs that are likely to cause differences in the creation of proteins.

The researchers did not find any individual protein-altering SNPs that met strict criteria for differences between the high-intelligence group and the control group. However, for SNPs that showed some difference between the groups, the rare allele was less frequently observed in the high intelligence group. This observation is consistent with research indicating that rare functional alleles are more often detrimental than beneficial to intelligence.

Maternal mortality rates over time, UK data

Greg Cochran has some interesting Thoughts on Genetic Load. (Currently, the most interesting candidate genes for potentially increasing IQ also have terrible side effects, like autism, Tay-Sachs, and torsion dystonia. The idea is that–perhaps–if you have only a few genes related to the condition, you get an IQ boost, but if you have too many, you get screwed.) Of course, even conventional high IQ has a cost: increased maternal mortality (larger heads).

Wikipedia defines genetic load as:

the difference between the fitness of an average genotype in a population and the fitness of some reference genotype, which may be either the best present in a population, or may be the theoretically optimal genotype. … Deleterious mutation load is the main contributing factor to genetic load overall.[5] Most mutations are deleterious, and occur at a high rate.

There’s math, if you want it.
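For those who don’t want to click through, the core definition is compact. With $\bar{w}$ the mean fitness of the population and $w_{\max}$ the fitness of the reference genotype:

$$ L = \frac{w_{\max} - \bar{w}}{w_{\max}} $$

One classic textbook result (the Haldane–Muller principle) is that at mutation–selection equilibrium, the load contributed by a locus depends essentially only on its mutation rate $\mu$ (roughly $L \approx 2\mu$ for a deleterious dominant, $L \approx \mu$ for a fully recessive), not on how harmful the allele is: severe mutations are kept rare, while mild ones linger at higher frequencies, and the two effects cancel out.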

Normally, genetic mutations are removed from the population at a rate determined by how bad they are. The worst mutations are lethal so early in development that the child is never even born. Slightly less bad mutations might survive to birth, but never reproduce. Mutations that are only a little bit deleterious might have no obvious effect, but result in having slightly fewer children than your neighbors. Over many generations, such a mutation will eventually disappear.

(Some mutations are more complicated–sickle cell, for example, is protective against malaria if you have only one copy of the mutation, but gives you sickle cell anemia if you have two.)

Jakubany is a town in the Carpathian Mountains

Throughout history, infant mortality was our single biggest killer. For example, here is some data from Jakubany, a town in the Carpathian Mountains:

We can see that, prior to the 1900s, the town’s infant mortality rate stayed consistently above 20%, and often peaked near 80%.

The graph’s creator states:

When I first ran a calculation of the infant mortality rate, I could not believe certain of the intermediate results. I recompiled all of the data and recalculated … with the same astounding result – 50.4% of the children born in Jakubany between the years 1772 and 1890 would die before reaching ten years of age! …one out of every two! Further, over the same 118 year period, of the 13306 children who were born, 2958 died (~22%) before reaching the age of one.

Historical infant mortality rates can be difficult to calculate in part because they were so high that people didn’t always bother to record infant deaths. And since infants are small and their bones delicate, their burials are not as easy to find as adults’. Nevertheless, Wikipedia estimates that Paleolithic man had an average life expectancy of 33 years:

Based on the data from recent hunter-gatherer populations, it is estimated that at 15, life expectancy was an additional 39 years (total 54), with a 0.60 probability of reaching 15.[12]

Priceonomics: Why life expectancy is misleading

In other words, a 40% chance of dying in childhood. (Not exactly the same as infant mortality, but close.)

Wikipedia gives similarly dismal stats for life expectancy in the Neolithic (20-33), Bronze and Iron ages (26), Classical Greece (28 or 25), Classical Rome (20-30), Pre-Columbian Southwest US (25-30), Medieval Islamic Caliphate (35), Late Medieval English Peerage (30), early modern England (33-40), and the whole world in 1900 (31).

Over at ThoughtCo: Surviving Infancy in the Middle Ages, the author reports estimates for between 30 and 50% infant mortality rates. I recall a study on Anasazi nutrition which I sadly can’t locate right now, which found 100% malnutrition rates among adults (based on enamel hypoplasias,) and 50% infant mortality.

As Priceonomics notes, the main driver of increasing global life expectancy–48 years in 1950 and 71.5 years in 2014 (according to Wikipedia)–has been a massive decrease in infant mortality. The average life expectancy of an American newborn back in 1900 was only 47 and a half years, whereas a 60 year old could expect to live to be 75. In 1998, the average infant could expect to live to about 75, and the average 60 year old could expect to live to about 80.
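The arithmetic behind “life expectancy is misleading” is simple enough to show directly: life expectancy at birth averages over everyone born, so a spike of infant deaths drags it far below the age a typical adult actually reaches. A toy cohort (invented numbers):

```python
# Toy cohort: 30% die in infancy (around age 1), everyone else lives to 70.
infant_mortality, infant_death_age, adult_death_age = 0.30, 1, 70

e0 = infant_mortality * infant_death_age + (1 - infant_mortality) * adult_death_age
print(f"life expectancy at birth: {e0:.1f} years")  # 49.3
# Eliminating the infant deaths alone lifts e0 from ~49 to 70,
# without a single adult living any longer.
```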

Back in his post on Mousetopia, Charlton writes:

Michael A Woodley suggests that what was going on [in the Mouse experiment] was much more likely to be mutation accumulation; with deleterious (but non-fatal) genes incrementally accumulating with each generation and generating a wide range of increasingly maladaptive behavioural pathologies; this process rapidly overwhelming and destroying the population before any beneficial mutations could emerge to ‘save’ the colony from extinction. …

The reason why mouse utopia might produce so rapid and extreme a mutation accumulation is that wild mice naturally suffer very high mortality rates from predation. …

Thus mutation selection balance is in operation among wild mice, with very high mortality rates continually weeding-out the high rate of spontaneously-occurring new mutations (especially among males) – with typically only a small and relatively mutation-free proportion of the (large numbers of) offspring surviving to reproduce; and a minority of the most active and healthy (mutation free) males siring the bulk of each generation.

However, in Mouse Utopia, there is no predation and all the other causes of mortality (eg. Starvation, violence from other mice) are reduced to a minimum – so the frequent mutations just accumulate, generation upon generation – randomly producing all sorts of pathological (maladaptive) behaviours.

Historically speaking, another selective factor operated on humans: while about 67% of women reproduced, only 33% of men did. By contrast, according to Psychology Today, a majority of today’s men have or will have children.

Today, almost everyone in the developed world has plenty of food, a comfortable home, and doesn’t have to worry about dying of bubonic plague. We live in humantopia, where the biggest factor influencing how many kids you have is how many you want to have.

source

Back in 1930, infant mortality rates were highest among the children of unskilled manual laborers, and lowest among the children of professionals (IIRC, this is British data.) Today, infant mortality is almost non-existent, but voluntary childlessness has inverted this phenomenon:

Yes, the percent of childless women appears to have declined since 1994, but the overall pattern of who is having children still holds. Further, while only 8% of women with post-graduate degrees have 4 or more children, 26% of those who never graduated from high school have 4+ kids. Meanwhile, the age of first-time moms has continued to climb.

In other words, the strongest remover of genetic load–infant mortality–has all but disappeared; populations with higher load (lower IQ) are having more children than populations with lower load; and everyone is having children later, which also increases genetic load.

Take a moment to consider the high-infant mortality situation: an average couple has a dozen children. Four of them, by random good luck, inherit a good combination of the couple’s genes and turn out healthy and smart. Four, by random bad luck, get a less lucky combination of genes and turn out not particularly healthy or smart. And four, by very bad luck, get some unpleasant mutations that render them quite unhealthy and rather dull.

Infant mortality claims half their children, taking the least healthy. They are left with 4 bright children and 2 moderately intelligent children. The four bright children succeed at life, marry well, and end up with several healthy, surviving children of their own, while the two moderately intelligent ones do okay and end up with a couple of children apiece.

On average, society’s overall health and IQ should hold steady or even increase over time, depending on how strong the selective pressures actually are.

Or consider a consanguineous couple with a high risk of genetic birth defects: perhaps a full 80% of their children die, but 20% turn out healthy and survive.

Today, by contrast, your average couple has two children. One of them is lucky, healthy, and smart. The other is unlucky, unhealthy, and dumb. Both survive. The lucky kid goes to college, majors in underwater intersectionist basket-weaving, and has one kid at age 40. That kid has Down Syndrome and never reproduces. The unlucky kid can’t keep a job, has chronic health problems, and 3 children by three different partners.

Your consanguineous couple migrates from war-torn Somalia to Minnesota. They still have 12 kids, but three of them are autistic with IQs below the official retardation threshold. “We never had this back in Somalia,” they cry. “We don’t even have a word for it.”
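To make the thought experiment concrete, here’s a minimal mutation-accumulation sketch: every child inherits a random parent’s load plus a few new mutations, and we either cull the highest-load half of each generation (a crude stand-in for historical infant mortality) or let everyone reproduce. All numbers are invented for illustration–this is a cartoon of the Charlton/Woodley argument, not a calibrated model.

```python
import random

random.seed(1)
POP, GENS = 500, 20

def next_generation(pop, cull):
    """Each child inherits a random parent's mutation count plus 0-4 new
    mutations (mean 2). With `cull`, the highest-load half dies before
    reproducing and the low-load half reproduces twice as much."""
    children = [random.choice(pop) + random.randint(0, 4) for _ in range(POP)]
    if cull:
        children.sort()
        children = children[: POP // 2] * 2
    return children

for cull in (True, False):
    pop = [0] * POP  # start everyone at zero mutations
    for _ in range(GENS):
        pop = next_generation(pop, cull)
    print(f"selection {'on ' if cull else 'off'}: mean load = {sum(pop)/POP:.1f}")
```

With selection off, load simply grows by the mutation rate every generation; with selection on, it levels off at whatever level the culling can keep up with.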

People normally think of dysgenics as merely “the dumb outbreed the smart,” but genetic load applies to everyone–men and women, smart and dull, black and white, young and especially old–because we all accumulate random copying errors when our DNA replicates.

I could offer a list of signs of increasing genetic load, but there’s no way to avoid cherry-picking trends I already know are happening, like falling sperm counts or rising (diagnosed) autism rates, so I’ll skip that. You may substitute your own list of “obvious signs society is falling apart at the genes” if you so desire.

Nevertheless, the transition from 30% (or greater) infant mortality to almost 0% is amazing, both on a technical level and because it heralds an unprecedented era in human evolution. The selective pressures on today’s people are massively different from those our ancestors faced, simply because our ancestors’ biggest filter was infant mortality. Unless infant mortality acted completely at random–taking the genetically loaded and unloaded alike–or on factors completely irrelevant to load, the elimination of infant mortality must continuously increase the genetic load in the human population. Over time, if that load is not selected out–say, through more people being too unhealthy to reproduce–then we will end up with an increasing population of physically sick, maladjusted, mentally ill, and low-IQ people.

(Remember, all mental traits are heritable–so genetic load influences everything, not just controversial ones like IQ.)

If all of the above is correct, then I see only 4 ways out:

  1. Do nothing: Genetic load increases until the population is non-functional and collapses, resulting in a return of Malthusian conditions, invasion by stronger neighbors, or extinction.
  2. Sterilization or other weeding out of high-load people, coupled with higher fertility by low-load people
  3. Abortion of high load fetuses
  4. Genetic engineering

#1 sounds unpleasant, and #2 would result in masses of unhappy people. We don’t have the technology for #4, yet. I don’t think the technology is quite there for #3, either, but it’s much closer–we can certainly test for many of the deleterious mutations that we do know of.