The late Henry Harpending of the West Hunter blog co-wrote The 10,000 Year Explosion with Greg Cochran, did anthropological field work among the Ju/’hoansi, and did pioneering work in population genetics. His biography has many interesting parts:
Henry’s early research on population genetics also helped establish the close relationship between genetics and geography. Genetic differences between groups tend to mirror the geographic distance between them, so that a map of genetic distances looks like a geographic map (Harpending and Jenkins, 1973). Henry developed methods for studying this relationship that are still in use. …
Meanwhile, Henry’s Kalahari field experience also motivated an interest in population ecology. Humans cope with variation in resource supply either by storage (averaging over time) or by mobility and sharing (averaging over space). These strategies are mutually exclusive. Those who store must defend their stored resources against others who would like to share them. Conversely, an ethic of sharing makes storage impossible. The contrast between the mobile and the sedentary Ju/’hoansi in Henry’s sample therefore represented a fundamental shift in strategy. …
Diseases need time to cause lesions on bone. If the infected individual dies quickly, no lesion will form, and the skeleton will look healthy. Lesions form only if the infected individual is healthy enough to survive for an extended period. Lesions on ancient bone may therefore imply that the population was healthy! …
In the 1970s, as Henry’s interest in genetic data waned, he began developing population genetic models of social evolution. He overturned 40 years of conventional wisdom by showing that group selection works best not when groups are isolated but when they are strongly connected by gene flow (1980, pp. 58-59; Harpending and Rogers, 1987). When gene flow is restricted, successful mutants cannot spread beyond the initial group, and group selection stalls.
Human DNA varies across geographic regions, with most variation observed so far reflecting distant ancestry differences. Here, we investigate the geographic clustering of genetic variants that influence complex traits and disease risk in a sample of ~450,000 individuals from Great Britain. Out of 30 traits analyzed, 16 show significant geographic clustering at the genetic level after controlling for ancestry, likely reflecting recent migration driven by socio-economic status (SES). Alleles associated with educational attainment (EA) show most clustering, with EA-decreasing alleles clustering in lower SES areas such as coal mining areas. Individuals that leave coal mining areas carry more EA-increasing alleles on average than the rest of Great Britain. In addition, we leveraged the geographic clustering of complex trait variation to further disentangle regional differences in socio-economic and cultural outcomes through genome-wide association studies on publicly available regional measures, namely coal mining, religiousness, 1970/2015 general election outcomes, and Brexit referendum results.
Let’s hope no one reports on this as “They found the Brexit gene!”
The northern United States long served as a land of opportunity for black Americans, but today the region’s racial gap in intergenerational mobility rivals that of the South. I show that racial composition changes during the peak of the Great Migration (1940-1970) reduced upward mobility in northern cities in the long run, with the largest effects on black men. I identify urban black population increases during the Migration at the commuting zone level using a shift-share instrument, interacting pre-1940 black southern migrant location choices with predicted outmigration from southern counties. The Migration’s negative effects on children’s adult outcomes appear driven by neighborhood factors, not changes in the characteristics of the average child. As early as the 1960s, the Migration led to greater white enrollment in private schools, increased spending on policing, and higher crime and incarceration rates. I estimate that the overall change in childhood environment induced by the Great Migration explains 43% of the upward mobility gap between black and white men in the region today.
43% is huge and, IMO, too big. However, the author may be on to something.
Mycobacterium tuberculosis (M.tb) is a globally distributed, obligate pathogen of humans that can be divided into seven clearly defined lineages. … We reconstructed M.tb migration in Africa and Eurasia, and investigated lineage specific patterns of spread. Applying evolutionary rates inferred with ancient M.tb genome calibration, we link M.tb dispersal to historical phenomena that altered patterns of connectivity throughout Africa and Eurasia: trans-Indian Ocean trade in spices and other goods, the Silk Road and its predecessors, the expansion of the Roman Empire and the European Age of Exploration. We find that Eastern Africa and Southeast Asia have been critical in the dispersal of M.tb.
I spend a surprising amount of time reading about mycobacteria.
Do people eventually grow ideologically resistant to dangerous local memes, but remain susceptible to foreign memes, allowing them to spread like invasive species?
And if so, can we find some way to memetically vaccinate ourselves against deadly ideas?
Memetics is the study of how ideas (“memes”) spread and evolve, using evolutionary theory and epidemiology as models. A “viral meme” is one that spreads swiftly through society, “infecting” minds as it goes.
Of course, most memes are fairly innocent (e.g. fashion trends) or even beneficial (“wash your hands before eating to prevent disease transmission”), but some ideas, like communism, kill people.
Ideologies consist of a big set of related ideas rather than a single one, so let’s call them memeplexes.
Almost all ideological memeplexes (and religions) sound great on paper–they have to, because that’s how they spread–but they are much more variable in actual practice.
Any idea that causes its believers to suffer is unlikely to persist–at the very least, because its believers die off.
Over time, in places where people have been exposed to ideological memeplexes, their worst aspects become known and people may learn to avoid them; the memeplexes themselves can evolve to be less harmful.
Over in epidemiology, diseases humans have been exposed to for a long time become less virulent as humans become adapted to them. Chickenpox, for example, is a fairly mild disease that kills few people because the virus has been infecting people for as long as people have been around (the ancestral Varicella-Zoster virus evolved approximately 65 million years ago and has been infecting animals ever since). Rather than kill you, chickenpox prefers to enter your nerves and go dormant for decades, reemerging later as shingles, ready to infect new people.
By contrast, smallpox (Variola major and Variola minor) probably evolved from a rodent-infecting virus about 16,000 to 68,000 years ago. That’s a big range, but either way, it’s much more recent than chickenpox. Smallpox made its first major impact on the historical record around the third century BC in Egypt, and thereafter became a recurring plague in Africa and Eurasia. Note that unlike chickenpox, which is old enough to have spread throughout the world with humanity, smallpox emerged long after major population splits occurred–like part of the Asian clade splitting off and heading into the Americas.
By 1400, Europeans had developed some immunity to smallpox (due to those who didn’t have any immunity dying), but when Columbus landed in the New World, folks here had never seen the disease before–and thus had no immunity. Diseases like smallpox and measles ripped through native communities, killing approximately 90% of the New World population.
If we extend this metaphor back to ideas–if people have been exposed to an ideology for a long time, they are more likely to have developed immunity to it or the ideology to have adapted to be relatively less harmful than it initially was. For example, the Protestant Reformation and subsequent Catholic counter-reformation triggered a series of European wars that killed 10 million people, but today Catholics and Protestants manage to live in the same countries without killing each other. New religions are much more likely to lead all of their followers in a mass suicide than old, established religions; countries that have just undergone a political revolution are much more likely to kill off large numbers of their citizens than ones that haven’t.
This is not to say that old ideas are perfect and never harmful–chickenpox still kills people and is not a fun disease–but that any bad aspects are likely to become more mild over time as people wise up to bad ideas, (certain caveats applying).
But this process only works for ideas that have been around for a long time. What about new ideas?
You can’t stop new ideas. Technology is always changing. The world is changing, and it requires new ideas to operate. When these new ideas arrive, even terrible ones can spread like wildfire because people have no memetic antibodies to resist them. New memes, in short, are like invasive memetic species.
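If you want to make the epidemiology analogy concrete, here is a minimal SIR-style toy model–my own illustration, not anything from an actual literature on meme spread, and the rates and fractions are all made up–showing why a brand-new meme facing a population with no “memetic antibodies” explodes, while the same meme mostly fizzles in a population already resistant to it:

```python
# Toy SIR model of meme spread (illustrative assumption, not a published model):
# susceptible minds (S) adopt the meme on contact with "infected" minds (I),
# then eventually drop it and become resistant (R).
def run_meme_sir(s0, i0=0.01, beta=0.3, gamma=0.1, days=365):
    """Simulate meme spread with simple daily steps; s0 = initially susceptible fraction.
    Returns the peak fraction of the population holding the meme at once."""
    s, i, r = s0, i0, 1.0 - s0 - i0
    peak = i
    for _ in range(days):
        new_adoptions = beta * s * i   # contacts between susceptible and infected
        drop_outs = gamma * i          # believers who tire of the meme
        s -= new_adoptions
        i += new_adoptions - drop_outs
        r += drop_outs
        peak = max(peak, i)
    return peak

# A brand-new idea: nearly everyone is susceptible -> a large epidemic.
peak_new = run_meme_sir(s0=0.98)
# An old idea most people are already "immune" to -> it never takes off.
peak_old = run_meme_sir(s0=0.20)
```

The only difference between the two runs is the initially susceptible fraction; the meme itself is equally “catchy” in both.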
In the late 1960s, 15 million people still caught smallpox every year. In 1980, it was declared officially eradicated–not one case had been seen since 1977, due to a massive, world-wide vaccination campaign.
Humans can acquire immunity to disease in two main ways. The slow way is everyone who isn’t immune dying; everyone left alive happens to have adaptations that let them not die, which they can pass on to their children. As with chickenpox, over generations, the disease becomes less severe because humans become successively more adapted to it.
The fast way is to catch a disease, produce antibodies that recognize and can fight it off, and thereafter enjoy immunity. This, of course, assumes that you survive the disease.
Vaccination works by teaching the body’s immune system to recognize a disease without infecting the patient with a full-strength germ, using a weakened or harmless version of the germ instead. Early on, weakened germs taken from actual smallpox scabs or lesions were used to inoculate people–a risky method, since the germs often weren’t that weak. Later, people discovered that cowpox was similar enough to smallpox that its antibodies could also fight smallpox, but cowpox itself was too adapted to cattle hosts to seriously harm humans. (Today I believe the vaccine uses a different weakened virus–vaccinia–but the principle is the same.)
The good part about memes is that you do not actually have to inject a physical substance into your body in order to learn about them.
Ideologies are very difficult to evaluate in the abstract, because, as mentioned, they are all optimized to sound good on paper. It’s their actual effects we are interested in.
So if we want to learn whether an idea is good or not, it’s probably best not to learn about it by merely reading books written by its advocates. Talk to people in places where the ideas have already been tried and learn from their experiences. If those people tell you this ideology causes mass suffering and they hate it, drop it like a hot potato. If those people are practicing an “impure” version of the ideology, it’s probably an improvement over the original.
For example, “communism” as practiced in China today is quite different from “communism” as practiced there 50 years ago–so much so that the modern system really isn’t communism at all. There was never, to my knowledge, an official changeover from one system to another, just a gradual accretion of improvements. This speaks strongly against communism as an ideology, since no country has managed to be successful by moving toward ideological communist purity, only by moving away from it–though they may still find it useful to retain some of communism’s original ideas.
I think there is a similar dynamic occurring in many Islamic countries. Islam is a relatively old religion that has had time to adapt to local conditions in many different parts of the world. For example, in Morocco, where the climate is more favorable to raising pigs than in other parts of the Islamic world, the taboo against pigs isn’t as strongly observed. The burka is not an Islamic universal, but characteristic of central Asia (the similar niqab is from Yemen). Islamic head coverings vary by culture–such as the kurhars traditionally worn by unmarried women in Ingushetia, north of the Caucasus, or the caps popular in Xinjiang. Turkey has laws officially restricting burkas in some areas, and Syria discourages even hijabs. Women in Iran did not go heavily veiled prior to the Iranian Revolution. So the insistence on extensive veiling in many Islamic communities (like the territory conquered by ISIS) is not a continuation of old traditions, but the imposition of a new, idealized version of Islam.
Purity is counter to practicality.
Of course, this approach is hampered by the fact that what works in one place, time, and community may not work in a different one. Tilling your fields one way works in Europe, and tilling them a different way works in Papua New Guinea. But extrapolating from what works is at least a good start.
While reading about the conditions in a Burmese prison around the turn of the previous century (The History and Romance of Crime: Oriental Prisons, by Arthur Griffiths–not good), it occurred to me that there might have been some beneficial effect of the large amounts of tobacco smoke inside the prison. Sure, in the long run, tobacco is highly likely to give you cancer, but in the short run, is it noxious to fleas and other disease-bearing pests?
Meanwhile in Melanesia (Pygmies and Papuans), a group of ornithologists struggled up a river to reach an almost completely isolated tribe of Melanesians that barely practiced horticulture; even further up the mountain they met a band of pygmies (negritoes) whose existence had only been rumored of. The pygmies cultivated tobacco, which they traded with their neighbors–who were otherwise not terribly interested in trading for worldly goods.
The homeless smoke at rates 3x higher than the rest of the population, though this might have something to do with the high correlation between schizophrenia and smoking–80% of schizophrenics smoke, compared to 20% of the general population. Obviously this correlation is best explained by tobacco’s well-noted psychological effects (including addiction), but why is tobacco so ubiquitous in prisons that cigarettes are used as currency? Could it have, in unsanitary conditions, some healthful purpose?
On average, the more THC byproduct that Hagen’s team found in an Aka man’s urine, the fewer worm eggs were present in his gut.
“The heaviest smokers, with everything else being equal, had about half the number of parasitic eggs in their stool, compared to everyone else,” Hagen says. …
THC — and nicotine — are known to kill intestinal worms in a Petri dish. And many worms make their way to the gut via the lungs. “The worms’ larval stage is in the lung,” Hagen says. “When you smoke you just blast them with THC or nicotine directly.”
Smoking kills. But if you’re a bird and if you want to kill parasites, that can be a good thing. City birds have taken to stuffing their nests with cigarette butts to poison potential parasites. Nature reports:
“In a study published today in Biology Letters, the researchers examined the nests of two bird species common on the North American continent. They measured the amount of cellulose acetate (a component of cigarette butts) in the nests, and found that the more there was, the fewer parasitic mites the nest contained.”
Out in the State of Nature, parasites are extremely common and difficult to get rid of (eg, hookworm elimination campaigns in the early 1900s found that 40% of school-aged children were infected); farmers can apparently use tobacco as a natural de-wormer (but be careful, as tobacco can be poisonous.)
In the pre-modern environment, when many people had neither shoes, toilets, nor purified water, parasites were very hard to avoid.
Befoundalive recommends eating the tobacco from a cigarette if you have intestinal parasites and no access to modern medicine.
Overall, 8 intestinal parasite species have been recovered singly or in combinations from 146 (61.8 %) samples. The prevalence in prison population (88/121 = 72.7%) was significantly higher than that in tobacco farm (58/115 = 50.4%).
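For what it’s worth, the quoted prevalences pass a quick back-of-the-envelope significance check. Here is a simple two-proportion z-test on the quoted counts–my own arithmetic, not necessarily the test the paper itself used:

```python
import math

# Two-proportion z-test on the quoted figures: 88/121 infected in the prison
# vs. 58/115 on the tobacco farm. (A back-of-the-envelope check, not the
# original paper's analysis.)
def two_prop_z(k1, n1, k2, n2):
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return p1, p2, (p1 - p2) / se

p_prison, p_farm, z = two_prop_z(88, 121, 58, 115)
# z comes out around 3.5, well past the 1.96 cutoff for p < 0.05, consistent
# with the authors' claim that prison prevalence is significantly higher.
```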
Because of developing resistance to the existing anthelmintic drugs, there is a need for new anthelmintic agents. Tobacco plant has alkaloid materials that have antiparasitic effect. We investigated the in vitro anthelminthic effect of aqueous and alcoholic extract of Tobacco (Nicotiana tabacum) against M. marshalli. … Overall, extracts of Tobacco possess considerable anthelminthic activity and more potent effects were observed with the highest concentrations. Therefore, the in vivo study on Tobocco in animal models is recommended.
(Helminths are parasites; anthelmintic=anti-parasites.)
So it looks like, at least in the pre-sewers and toilets and clean water environment when people struggled to stay parasite free, tobacco (and certain other drugs) may have offered people an edge over the pests. (I’ve noticed many bitter or noxious plants seem to have been useful for occasionally flushing out parasites, but you certainly don’t want to be in a state of “flush” all the time.)
It looks like it was only when regular sanitation got good enough that we didn’t have to worry about parasites anymore that people started getting really concerned with tobacco’s long-term negative effects on humans.
Crohn’s is an inflammatory disease of the digestive tract involving diarrhea, vomiting, internal lesions, pain, and severe weight loss. Left untreated, Crohn’s can lead to death through direct starvation/malnutrition, infections caused by the intestinal walls breaking down and spilling feces into the rest of the body, or a whole host of other horrible symptoms, like pyoderma gangrenosum–basically your skin just rotting off.
Crohn’s disease has no known cause and no cure, though several treatments have proven effective at putting it into remission–at least temporarily.
The disease appears to be triggered by a combination of environmental, bacterial, and genetic factors–about 70 genes have been identified so far that appear to contribute to an individual’s chance of developing Crohn’s, but no gene has been found yet that definitely triggers it. (The siblings of people who have Crohn’s are more likely than non-siblings to also have it, and identical twins of Crohn’s patients have a 55% chance of developing it.) A variety of environmental factors, such as living in a first world country (parasites may be somewhat protective against the disease), smoking, or eating lots of animal protein, also correlate with Crohn’s, but since only about 3.2 in 1,000 people even in the West have it, these obviously don’t trigger the disease in most people.
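To see why everyone is so confident genes are involved even without a single triggering gene, it helps to put the twin concordance next to the background rate. This is my own back-of-the-envelope arithmetic, not a calculation from the sources:

```python
# Rough arithmetic behind the "genetic factors" claim (my own back-of-the-
# envelope illustration): compare the 55% identical-twin concordance to the
# ~3.2-per-1000 background prevalence quoted above.
background = 3.2 / 1000      # prevalence in the general Western population
twin_concordance = 0.55      # chance an identical twin of a patient is affected

relative_risk = twin_concordance / background
# Roughly 170x: an identical twin of a Crohn's patient is on the order of a
# hundred times likelier to develop the disease than a random person, which
# is why genes clearly matter even though no single gene "causes" Crohn's.
```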
Crohn’s appears to be a kind of over-reaction of the immune system, though not specifically an auto-immune disorder, which suggests that a pathogen of some sort is probably involved. Most people are probably able to fight off this pathogen, but people with a variety of genetic issues may have more trouble–according to Wikipedia, “There is considerable overlap between susceptibility loci for IBD and mycobacterial infections.” Mycobacteria are a genus of bacteria that includes the species that cause tuberculosis and leprosy. A variety of bacteria–including specific strains of E. coli, Yersinia, Listeria, and Mycobacterium avium subspecies paratuberculosis–are found in the intestines of Crohn’s sufferers at higher rates than in the intestines of non-sufferers (intestines, of course, are full of all kinds of bacteria).
Crohn’s treatment depends on the severity of the case and specific symptoms, but often includes a course of antibiotics, (especially if the patient has abscesses,) tube feeding (in acute cases where the sufferer is having trouble digesting food,) and long-term immune-system suppressants such as prednisone, methotrexate, or infliximab. In severe cases, damaged portions of the intestines may be cut out. Before the development of immunosuppressant treatments, sufferers often progressively lost more and more of their intestines, with predictably unpleasant results, like no longer having a functioning colon. (70% of Crohn’s sufferers eventually have surgery.)
A similar disease, Johne’s, infects cattle. Johne’s is caused by Mycobacterium avium subspecies paratuberculosis, (hereafter just MAP). MAP typically infects calves at birth, transmitted via infected feces from their mothers, incubates for two years, and then manifests as diarrhea, malnutrition, dehydration, wasting, starvation, and death. Luckily for cows, there’s a vaccine, though any infectious disease in a herd is a problem for farmers.
If you’re thinking that “paratuberculosis” sounds like “tuberculosis,” you’re correct. When scientists first isolated it, they thought the bacteria looked rather like tuberculosis, hence the name, “tuberculosis-like.” The scientists’ instincts were correct, and it turns out that MAP is in the same bacterial genus as the germs that cause tuberculosis and leprosy (though it may be more closely related to leprosy than TB). (“Genus” is one step up from “species”: our species is Homo sapiens; our genus, Homo, we share with Homo neanderthalensis, Homo erectus, etc., but chimps and gorillas are not in the genus Homo.)
The intestines of cattle who have died of MAP look remarkably like the intestines of people suffering from advanced Crohn’s disease.
MAP can actually infect all sorts of mammals, not just cows; it’s just more common and problematic in cattle herds. (Sorry, we’re not getting through this post without photos of infected intestines.)
So here’s how it could work:
The MAP bacteria–possibly transmitted via milk or meat products–is fairly common and infects a variety of mammals. Most people who encounter it fight it off with no difficulty (or perhaps have a short bout of diarrhea and then recover.)
A few people, though, have genetic issues that make it harder for them to fight off the infection. For example, Crohn’s sufferers produce less intestinal mucus, which normally acts as a barrier between the intestines and all of the stuff in them.
Interestingly, parasite infections can increase intestinal mucus (some parasites feed on mucus), which in turn is protective against other forms of infection; decreasing parasite load can increase the chance of other intestinal infections.
Once MAP enters the intestinal walls, the immune system attempts to fight it off, but a genetic defect in autophagy results in the immune cells themselves getting infected. The body responds to the signs of infection by sending more immune cells to fight it, which subsequently also get infected with MAP, triggering the body to send even more immune cells. These lumps of infected cells become the characteristic ulcerations and lesions that mark Crohn’s disease and eventually leave the intestines riddled with inflamed tissue and holes.
The most effective treatments for Crohn’s, like infliximab (an anti-TNF antibody), don’t target the infection but the immune system. They work by interrupting the immune system’s feedback cycle so that it stops sending more cells to the infected area, giving the already infected cells a chance to die. This doesn’t cure the disease, but it does give the intestines time to recover.
There were 70 reported cases of tuberculosis after treatment with infliximab for a median of 12 weeks. In 48 patients, tuberculosis developed after three or fewer infusions. … Of the 70 reports, 64 were from countries with a low incidence of tuberculosis. The reported frequency of tuberculosis in association with infliximab therapy was much higher than the reported frequency of other opportunistic infections associated with this drug. In addition, the rate of reported cases of tuberculosis among patients treated with infliximab was higher than the available background rates.
This is because infliximab actively suppresses the immune system’s ability to fight diseases in the TB family.
Luckily, if you live in the first world and aren’t in prison, you’re unlikely to catch TB–only about 5-10% of the US population tests positive for TB, compared to 80% in many African and Asian countries. (In other words, increased immigration from these countries will absolutely put Crohn’s sufferers at risk of dying.)
Crohn’s, TB, and leprosy share a fair number of similarities–most notably, they are all very slow diseases that can take years to finally kill you. By contrast, other deadly diseases, like smallpox, cholera, and Yersinia pestis (plague), spread and kill extremely quickly. Within about two weeks, you’ll definitely know whether your plague infection is going to kill you, whereas you can have leprosy for 20 years before you even notice it.
Tuberculosis is classified as one of the granulomatous inflammatory diseases. Macrophages, T lymphocytes, B lymphocytes, and fibroblasts aggregate to form granulomas, with lymphocytes surrounding the infected macrophages. When other macrophages attack the infected macrophage, they fuse together to form a giant multinucleated cell in the alveolar lumen. The granuloma may prevent dissemination of the mycobacteria and provide a local environment for interaction of cells of the immune system. However, more recent evidence suggests that the bacteria use the granulomas to avoid destruction by the host’s immune system. … In many people, the infection waxes and wanes.
Crohn’s also waxes and wanes. Many sufferers experience flare-ups of the disease, during which they may have to be hospitalized, tube fed, and put through another round of antibiotics or resection (surgical removal of part of the intestines) before they improve–until the disease flares up again.
Leprosy is also marked by lesions, though of course so are dozens of other diseases.
Note: Since Crohn’s is a complex, multi-factorial disease, there may be more than one bacterium or pathogen that could infect people and create similar results. Alternatively, Crohn’s sufferers may simply have intestines that are really bad at fighting off all sorts of diseases, as a side effect of Crohn’s, not a cause, resulting in a variety of unpleasant infections.
The MAP hypothesis suggests several possible treatment routes:
Improving the intestinal mucus, perhaps via parasites or medicines derived from parasites
Improving the intestinal microbe balance
Antibiotics that treat MAP
Anti-MAP vaccine similar to the one for Johne’s disease in cattle
To determine how the worms could be our frenemies, Cadwell and colleagues tested mice with the same genetic defect found in many people with Crohn’s disease. Mucus-secreting cells in the intestines malfunction in the animals, reducing the amount of mucus that protects the gut lining from harmful bacteria. Researchers have also detected a change in the rodents’ microbiome, the natural microbial community in their guts. The abundance of one microbe, an inflammation-inducing bacterium in the Bacteroides group, soars in the mice with the genetic defect.
The researchers found that feeding the rodents one type of intestinal worm restored their mucus-producing cells to normal. At the same time, levels of two inflammation indicators declined in the animals’ intestines. In addition, the bacterial lineup in the rodents’ guts shifted, the team reports online today in Science. Bacteroides’s numbers plunged, whereas the prevalence of species in a different microbial group, the Clostridiales, increased. A second species of worm also triggers similar changes in the mice’s intestines, the team confirmed.
To check whether helminths cause the same effects in people, the scientists compared two populations in Malaysia: urbanites living in Kuala Lumpur, who harbor few intestinal parasites, and members of an indigenous group, the Orang Asli, who live in a rural area where the worms are rife. A type of Bacteroides, the proinflammatory microbes, predominated in the residents of Kuala Lumpur. It was rarer among the Orang Asli, where a member of the Clostridiales group was plentiful. Treating the Orang Asli with drugs to kill their intestinal worms reversed this pattern, favoring Bacteroides species over Clostridiales species, the team documented.
This sounds unethical unless they were merely tagging along with another team of doctors who were de-worming the Orangs for normal health reasons and didn’t intend on potentially inflicting Crohn’s on people. Nevertheless, it’s an interesting study.
At any rate, so far they haven’t managed to produce an effective medicine from parasites, possibly in part because people think parasites are icky.
But if parasites aren’t disgusting enough for you, there’s always the option of directly changing the gut bacteria: fecal microbiota transplants (FMT). A fecal transplant is exactly what it sounds like: you take the regular feces out of the patient and put in new, fresh feces from an uninfected donor. (When your other option is pooping into a bag for the rest of your life because your colon was removed, swallowing a few poop pills doesn’t sound so bad.) E.g., Fecal microbiota transplant for refractory Crohn’s:
Approximately one-third of patients with Crohn’s disease do not respond to conventional treatments, and some experience significant adverse effects, such as serious infections and lymphoma, and many patients require surgery due to complications. … Herein, we present a patient with Crohn’s colitis in whom biologic therapy failed previously, but clinical remission and endoscopic improvement was achieved after a single fecal microbiota transplantation infusion.
Antibiotics are another potential route. Redhill Biopharma is conducting a phase III clinical study of antibiotics designed to fight MAP in Crohn’s patients. Redhill is expected to release some of their results in April.
Mechanism of action: The vaccine is what is called a ‘T-cell’ vaccine. T-cells are a type of white blood cell -an important player in the immune system- in particular, for fighting against organisms that hide INSIDE the body’s cells –like MAP does. Many people are exposed to MAP but most don’t get Crohn’s –Why? Because their T-cells can ‘see’ and destroy MAP. In those who do get Crohn’s, the immune system has a ‘blind spot’ –their T-cells cannot see MAP. The vaccine works by UN-BLINDING the immune system to MAP, reversing the immune dysregulation and programming the body’s own T-cells to seek out and destroy cells containing MAP. For general information, there are two informative videos about T Cells and the immune system below.
Efficacy: In extensive tests in animals (in mice and in cattle), 2 shots of the vaccine spaced 8 weeks apart proved to be a powerful, long-lasting stimulant of immunity against MAP. To read the published data from the trial in mice, click here. To read the published data from the trial in cattle, click here.
Dr. Borody (who was influential in the discovery that ulcers are caused by the H. pylori bacteria, not stress) has had amazing success treating Crohn’s patients with a combination of infliximab, anti-MAP antibiotics, and hyperbaric oxygen. Here are two of his before-and-after photos of the intestines of a 31-year-old Crohn’s sufferer:
Here are some more interesting articles on the subject:
Last week, Davis and colleagues in the U.S. and India published a case report in Frontiers of Medicine http://journal.frontiersin.org/article/10.3389/fmed.2016.00049/full . The report described a single patient, clearly infected with MAP, with the classic features of Johne’s disease in cattle, including the massive shedding of MAP in his feces. The patient was also ill with clinical features that were indistinguishable from the clinical features of Crohn’s. In this case though, a novel treatment approach cleared the patient’s infection.
The patient was treated with antibiotics known to be effective for tuberculosis, which then eliminated the clinical symptoms of Crohn’s disease, too.
Through luck, hard work, good fortune, perseverance, and wonderful doctors, I seem to be one of the few people in the world who can claim to be “cured” of Crohn’s Disease. … In brief, I was treated for 6 years with medications normally used for multidrug resistant TB and leprosy, under the theory that a particular germ causes Crohn’s Disease. I got well, and have been entirely well since 2004. I do not follow a particular diet, and my recent colonoscopies and blood work have shown that I have no inflammation. The rest of these 3 blogs will explain more of the story.
What about removing Johne’s disease from the food supply? Assuming Johne’s is the culprit, this may be hard to do (it’s pretty contagious in cattle, can lie dormant for years, and the bacterium survives cooking), but drinking ultrapasteurized milk may be protective, especially for people who are susceptible to the disease.
However… there are also studies that contradict the MAP theory. For example, a recent study of the rate of Crohn’s disease in people exposed to Johne’s disease found no correlation. (Granted, Crohn’s is a pretty rare condition, and the survey found only 7 total cases, few enough that random chance could be a factor; but these are people who probably got very up close and personal with MAP-infected feces.)
Logistic regression showed no significant association with measures of potential contamination of water sources with MAP, water intake, or water treatment. Multivariate analysis showed that consumption of pasteurized milk (per kg/month: odds ratio (OR) = 0.82, 95% confidence interval (CI): 0.69, 0.97) was associated with a reduced risk of Crohn’s disease. Meat intake (per kg/month: OR = 1.40, 95% CI: 1.17, 1.67) was associated with a significantly increased risk of Crohn’s disease, whereas fruit consumption (per kg/month: OR = 0.78, 95% CI: 0.67, 0.92) was associated with reduced risk.
So even if Crohn’s is caused by MAP or something similar, it appears that people aren’t catching it from milk.
There are other theories about what causes Crohn’s–these folks, for example, think it’s related to consumption of GMO corn. Perhaps MAP has only been found in the intestines of Crohn’s patients because people with Crohn’s are really bad at fighting off infections. Perhaps the whole thing is caused by weird gut bacteria, too few parasites, insufficient Vitamin D, or industrial pollution.
New tests on two ancient teeth found in a cave in Indonesia more than 120 years ago have established that early modern humans arrived in Southeast Asia at least 20,000 years earlier than scientists previously thought, according to a new study. …
The findings push back the date of the earliest known modern human presence in tropical Southeast Asia to between 63,000 and 73,000 years ago. The new study also suggests that early modern humans could have made the crossing to Australia much earlier than the commonly accepted time frame of 60,000 to 65,000 years ago.
I would like to emphasize that nothing based on a couple of teeth is conclusive, “settled,” or “proven” science. Samples can get contaminated, machines make errors, people play tricks–in the end, we’re looking for the weight of the evidence.
I am personally of the opinion that there were (at least) two ancient human migrations into south east Asia, but only time will tell if I am correct.
We investigated the genetic architecture of family relationship satisfaction and friendship satisfaction in the UK Biobank. …
In the DSM-5, difficulties in social functioning are among the criteria for diagnosing conditions such as autism, anorexia nervosa, schizophrenia, and bipolar disorder. However, little is known about the genetic architecture of social relationship satisfaction, and whether social relationship dissatisfaction genetically contributes to risk for psychiatric conditions. …
We present the results of a large-scale genome-wide association study of social relationship satisfaction in the UK Biobank, measured using family relationship satisfaction and friendship satisfaction. Despite the modest phenotypic correlations, there was a significant and high genetic correlation between the two phenotypes, suggesting a similar genetic architecture between the two phenotypes.
Note: the two “phenotypes” here are “family relationship satisfaction” and “friendship satisfaction.”
We first investigated if the two phenotypes were genetically correlated with psychiatric conditions. As predicted, most if not all psychiatric conditions had a significant negative correlation for the two phenotypes. … We observed significant negative genetic correlation between the two phenotypes and a large cross-condition psychiatric GWAS. This underscores the importance of social relationship dissatisfaction in psychiatric conditions. …
In other words, people with mental illnesses generally don’t have a lot of friends nor get along with their families.
One notable exception is the negative genetic correlation between measures of cognition and the two phenotypes. Whilst subjective wellbeing is positively genetically correlated with measures of cognition, we identify a small but statistically significant negative correlation between measures of cognition and the two phenotypes.
Are they saying that smart people have fewer friends? Or that dumber people are happier with their friends and families? I think they are couching this finding in intentionally obtuse language.
A recent study highlighted that people with very high IQ scores tend to report lower satisfaction with life when they socialize more frequently.
Oh, I think I read that one. It’s not the socialization per se that’s the problem, but spending time away from the smart person’s intellectual activities. For example, I enjoy discussing the latest genetics findings with friends, but I don’t enjoy going on family vacations because they are a lot of work that does not involve genetics. (This is actually something my relatives complain about.)
…alleles that increase the risk for schizophrenia are in the same haplotype as alleles that decrease friendship satisfaction. The functional consequences of this locus must be formally tested. …
Loss-of-function mutations in these genes lead to severe biochemical consequences, and are implicated in several neuropsychiatric conditions. For example, de novo loss-of-function mutations in pLI-intolerant genes confer significant risk for autism. Our results suggest that pLI > 0.9 genes contribute to psychiatric risk through both common and rare genetic variation.
A species may live in relative equilibrium with its environment, hardly changing from generation to generation, for millions of years. Turtles, for example, have barely changed since the Cretaceous, when dinosaurs still roamed the Earth.
But if the environment changes–critically, if selective pressures change–then the species will change, too. This was most famously demonstrated with English moths, which changed color from white-and-black speckled to pure black when pollution darkened the trunks of the trees they lived on. To survive, these moths need to avoid being eaten by birds, so any moth that stands out against the tree trunks tends to get turned into an avian snack. Against light-colored trees, dark-colored moths stood out and were eaten. Against dark-colored trees, light-colored moths stood out.
This change did not require millions of years. Dark-colored moths were virtually unknown in 1810, but by 1895, 98% of the moths were black.
The time it takes for evolution to occur depends simply on (A) the frequency of a trait in the population and (B) how strongly you are selecting for (or against) it.
Let’s break this down a little bit. Within a species, there exists a great deal of genetic variation. Some of this variation happens because two parents with different genes get together and produce offspring with a combination of their genes. Some of this variation happens because of random errors–mutations–that occur during copying of the genetic code. Much of the “natural variation” we see today started as some kind of error that proved to be useful, or at least not harmful. For example, all humans originally had dark skin similar to modern Africans’, but random mutations in some of the folks who no longer lived in Africa gave them lighter skin, eventually producing “white” and “Asian” skin tones.
(These random mutations also happen in Africa, but there they are harmful and so don’t stick around.)
Natural selection can only act on the traits that are actually present in the population. If we tried to select for “ability to shoot x-ray lasers from our eyes,” we wouldn’t get very far, because no one actually has that mutation. By contrast, albinism is rare, but it definitely exists, and if for some reason we wanted to select for it, we certainly could. (The incidence of albinism among the Hopi Indians is high enough–1 in 200 Hopis vs. 1 in 20,000 Europeans generally and 1 in 30,000 Southern Europeans–for scientists to discuss whether the Hopi have been actively selecting for albinism. This still isn’t a lot of albinism, but since the American Southwest is not a good environment for pale skin, it’s something.)
You will have a much easier time selecting for traits that crop up more frequently in your population than traits that crop up rarely (or never).
Second, we have intensity–and variety–of selective pressure. What % of your population is getting removed by natural selection each year? If 50% of your moths get eaten by birds because they’re too light, you’ll get a much faster change than if only 10% of moths get eaten.
Selection doesn’t have to involve getting eaten, though. Perhaps some of your moths are moth Lotharios, seducing all of the moth ladies with their fuzzy antennae. Over time, the moth population will develop fuzzier antennae as these handsome males out-reproduce their less hirsute cousins.
No matter what kind of selection you have, nor what part of your curve it’s working on, all that ultimately matters is how many offspring each individual has. If white moths have more children than black moths, then you end up with more white moths. If black moths have more babies, then you get more black moths.
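That arithmetic is easy to see in a toy model. Here's a minimal sketch of the standard one-locus selection recursion; the 20% fitness edge and the starting frequency are illustrative assumptions, not measured moth data:

```python
# Toy haploid selection model: frequency of the dark-moth allele over
# generations, where dark moths leave more surviving offspring on sooty trees.
# Fitness values (w_dark, w_light) and the starting frequency are assumptions.

def select(p, w_dark, w_light, generations):
    """Iterate the one-locus selection recursion:
    p' = p*w_dark / (p*w_dark + (1-p)*w_light)."""
    for _ in range(generations):
        p = p * w_dark / (p * w_dark + (1 - p) * w_light)
    return p

# Start with dark moths very rare (as around 1810) and give them a
# 20% reproductive advantage; assume roughly one generation per year.
p0 = 0.001
p_final = select(p0, w_dark=1.2, w_light=1.0, generations=85)
print(f"dark-allele frequency after 85 generations: {p_final:.3f}")
```

With a modest 20% reproductive edge, an allele carried by one moth in a thousand takes over almost completely within the ~85 yearly generations between 1810 and 1895. No millions of years required.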
So what happens when you completely remove selective pressures from a population?
Back in 1968, ethologist John B. Calhoun set up an experiment popularly called “Mouse Utopia.” Four pairs of mice were given a large, comfortable habitat with no predators and plenty of food and water.
Predictably, the mouse population increased rapidly–once the mice were established in their new homes, their population doubled every 55 days. But after 211 days of explosive growth, reproduction began–mysteriously–to slow. For the next 245 days, the mouse population doubled only once every 145 days.
The birth rate continued to decline. As births and death reached parity, the mouse population stopped growing. Finally the last breeding female died, and the whole colony went extinct.
As I’ve mentioned before, Israel is (AFAIK) the only developed country in the world with a TFR above replacement.
It has long been known that overcrowding leads to population stress and reduced reproduction, but overcrowding can only explain why the mouse population began to shrink–not why it died out. Surely by the time there were only a few breeding pairs left, things had become comfortable enough for the remaining mice to resume reproducing. Why did the population not stabilize at some comfortable level?
Professor Bruce Charlton suggests an alternative explanation: the removal of selective pressures on the mouse population resulted in increasing mutational load, until the entire population became too mutated to reproduce.
Unfortunately, randomly changing part of your genetic code is more likely to give you no skin than skintanium armor.
But only the worst genetic problems are eliminated before birth. Plenty of mutations merely reduce fitness without actually killing you. Down Syndrome, famously, is caused by an extra copy of chromosome 21.
While a few traits–such as sex or eye color–can be simply modeled as influenced by only one or two genes, many traits–such as height or IQ–appear to be influenced by hundreds or thousands of genes:
Differences in human height are 60–80% heritable, according to several twin studies, and height has been considered polygenic since the Mendelian-biometrician debate a hundred years ago. A genome-wide association (GWA) study of more than 180,000 individuals has identified hundreds of genetic variants in at least 180 loci associated with adult human height. The number of individuals has since been expanded to 253,288 individuals and the number of genetic variants identified is 697 in 423 genetic loci.
Obviously each of these genes plays only a small role in determining overall height (and this is of course holding environmental factors constant). There are a few extreme conditions–gigantism and dwarfism–that can be caused by single mutations, but the vast majority of height variation is caused by which particular mix of those ~700 variants you happen to have.
The general figure for the heritability of IQ, according to an authoritative American Psychological Association report, is 0.45 for children, and rises to around 0.75 for late teens and adults. In simpler terms, IQ goes from being weakly correlated with genetics, for children, to being strongly correlated with genetics for late teens and adults. … Recent studies suggest that family and parenting characteristics are not significant contributors to variation in IQ scores; however, poor prenatal environment, malnutrition and disease can have deleterious effects.…
Despite intelligence having substantial heritability (0.54) and a confirmed polygenic nature, initial genetic studies were mostly underpowered. Here we report a meta-analysis for intelligence of 78,308 individuals. We identify 336 associated SNPs (METAL P < 5 × 10−8) in 18 genomic loci, of which 15 are new. Around half of the SNPs are located inside a gene, implicating 22 genes, of which 11 are new findings. Gene-based analyses identified an additional 30 genes (MAGMA P < 2.73 × 10−6), of which all but one had not been implicated previously. We show that the identified genes are predominantly expressed in brain tissue, and pathway analysis indicates the involvement of genes regulating cell development (MAGMA competitive P = 3.5 × 10−6). Despite the well-known difference in twin-based heritability for intelligence in childhood (0.45) and adulthood (0.80), we show substantial genetic correlation (rg = 0.89, LD score regression P = 5.4 × 10−29). These findings provide new insight into the genetic architecture of intelligence.
The greater the number of genes influencing a trait, the harder they are to identify without extremely large studies, because any small group of people might not even share the same set of relevant variants.
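A quick simulation shows why. The numbers below are purely illustrative (700 loci, equal effect sizes, 50% allele frequency; nothing here comes from the actual GWAS): the trait is just the sum of trait-increasing alleles each person carries.

```python
import random

# Sketch: a trait influenced by many loci of tiny, equal effect. Each
# diploid individual carries 0, 1, or 2 trait-increasing alleles at each
# of 700 loci (allele frequency 0.5); the trait is the sum of allele counts.
# Locus count loosely echoes the ~697 height variants; effects are assumed.

random.seed(0)
N_LOCI, N_PEOPLE = 700, 2000

def trait(n_loci):
    # two random draws per locus = one diploid genotype
    return sum(random.random() < 0.5 for _ in range(2 * n_loci))

scores = [trait(N_LOCI) for _ in range(N_PEOPLE)]
mean = sum(scores) / N_PEOPLE
var = sum((s - mean) ** 2 for s in scores) / N_PEOPLE

print(f"mean = {mean:.1f} (theory: 700), variance = {var:.1f} (theory: 350)")
# Each single locus contributes only 0.5/350, about 0.14% of the variance,
# which is why detecting it takes samples in the hundreds of thousands.
```

The resulting trait distribution is close to a normal bell curve even though every individual locus is a coin flip, and no single locus moves the needle enough to stand out in a small sample.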
High IQ correlates positively with a number of life outcomes, like health and longevity, while low IQ correlates with negative outcomes like disease, mental illness, and early death. Obviously this is in part because dumb people are more likely to make dumb choices which lead to death or disease, but IQ also correlates with choice-free matters like height and your ability to quickly press a button. Our brains are not some mysterious entities floating in a void, but physical parts of our bodies, and anything that affects our overall health and physical functioning is likely to also have an effect on our brains.
The study focused, for the first time, on rare, functional SNPs – rare because previous research had only considered common SNPs and functional because these are SNPs that are likely to cause differences in the creation of proteins.
The researchers did not find any individual protein-altering SNPs that met strict criteria for differences between the high-intelligence group and the control group. However, for SNPs that showed some difference between the groups, the rare allele was less frequently observed in the high intelligence group. This observation is consistent with research indicating that rare functional alleles are more often detrimental than beneficial to intelligence.
Greg Cochran has some interesting Thoughts on Genetic Load. (Currently, the most interesting candidate genes for potentially increasing IQ also have terrible side effects, like autism, Tay-Sachs, and Torsion Dystonia. The idea is that–perhaps–if you have only a few genes related to the condition, you get an IQ boost, but if you have too many, you get screwed.) Of course, even conventional high IQ has a cost: increased maternal mortality (larger heads).
the difference between the fitness of an average genotype in a population and the fitness of some reference genotype, which may be either the best present in a population, or may be the theoretically optimal genotype. … Deleterious mutation load is the main contributing factor to genetic load overall. Most mutations are deleterious, and occur at a high rate.
There’s math, if you want it.
Normally, genetic mutations are removed from the population at a rate determined by how bad they are. Really bad mutations kill you instantly, and so are never born. Slightly less bad mutations might survive, but never reproduce. Mutations that are only a little bit deleterious might have no obvious effect, but result in having slightly fewer children than your neighbors. Over many generations, this mutation will eventually disappear.
(Some mutations are more complicated–sickle cell, for example, is protective against malaria if you have only one copy of the mutation, but gives you sickle cell anemia if you have two.)
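This trade-off is the classic mutation-selection balance, and it can be sketched in a few lines. The parameters below are made up for illustration: u is the number of new deleterious mutations per genome per generation, and s is the fraction of a genome's mutations that selection prunes each generation.

```python
# Deterministic toy model of mutation-selection balance: each generation,
# selection removes a fraction s of the existing deleterious mutations
# (on average) while mutation adds u new ones. Parameters are illustrative.

def mean_load(u, s, generations):
    """Mean deleterious mutations per genome under m' = m*(1 - s) + u."""
    m = 0.0
    for _ in range(generations):
        m = m * (1 - s) + u
    return m

# With selection, load equilibrates near the classic u/s balance:
print(f"{mean_load(u=0.1, s=0.02, generations=2000):.2f}")  # 5.00 = u/s
# With selection switched off (mouse utopia), load grows without bound:
print(f"{mean_load(u=0.1, s=0.0, generations=2000):.2f}")   # 200.00 and climbing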
Throughout history, infant mortality was our single biggest killer. For example, here is some data from Jakubany, a town in the Carpathian Mountains:
We can see that, prior to the 1900s, the town’s infant mortality rate stayed consistently above 20%, and often peaked near 80%.
When I first ran a calculation of the infant mortality rate, I could not believe certain of the intermediate results. I recompiled all of the data and recalculated … with the same astounding result – 50.4% of the children born in Jakubany between the years 1772 and 1890 would die before reaching ten years of age! …one out of every two! Further, over the same 118-year period, of the 13,306 children who were born, 2,958 died (~22%) before reaching the age of one.
Historical infant mortality rates can be difficult to calculate, in part because they were so high that people didn’t always bother to record infant deaths. And since infants are small and their bones delicate, their burials are not as easy to find as adults’. Nevertheless, Wikipedia estimates that Paleolithic man had an average life expectancy of 33 years:
Based on the data from recent hunter-gatherer populations, it is estimated that at 15, life expectancy was an additional 39 years (total 54), with a 0.60 probability of reaching 15.
In other words, a 40% chance of dying in childhood. (Not exactly the same as infant mortality, but close.)
Wikipedia gives similarly dismal stats for life expectancy in the Neolithic (20-33), Bronze and Iron ages (26), Classical Greece (28 or 25), Classical Rome (20-30), Pre-Columbian Southwest US (25-30), Medieval Islamic Caliphate (35), Late Medieval English Peerage (30), early modern England (33-40), and the whole world in 1900 (31).
Over at ThoughtCo: Surviving Infancy in the Middle Ages, the author reports estimates of between 30 and 50% infant mortality. I recall a study on Anasazi nutrition, which I sadly can’t locate right now, that found 100% malnutrition rates among adults (based on enamel hypoplasias) and 50% infant mortality.
As Priceonomics notes, the main driver of increasing global life expectancy–48 years in 1950 and 71.5 years in 2014 (according to Wikipedia)–has been a massive decrease in infant mortality. The average life expectancy of an American newborn back in 1900 was only 47 and a half years, whereas a 60 year old could expect to live to be 75. In 1998, the average infant could expect to live to about 75, and the average 60 year old could expect to live to about 80.
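The arithmetic behind this is worth seeing once. A back-of-the-envelope sketch using the hunter-gatherer figures quoted above (the 2-year mean age of childhood death is my assumption, not from the source):

```python
# How infant mortality drags down "life expectancy at birth": a weighted
# average of survivors and non-survivors. Inputs echo the hunter-gatherer
# figures above; the mean age of childhood death is an assumption.

p_survive_childhood = 0.60   # probability of reaching age 15
adult_mean_lifespan = 54     # mean years lived by those who reach 15
child_mean_death_age = 2     # assumed: most childhood deaths are in infancy

e0 = (p_survive_childhood * adult_mean_lifespan
      + (1 - p_survive_childhood) * child_mean_death_age)
print(f"life expectancy at birth = {e0:.1f} years")  # 33.2
```

So a "life expectancy of 33" mostly means lots of dead children, not adults dropping dead at 33, which is exactly why eliminating infant mortality moved the global average so dramatically.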
Michael A Woodley suggests that what was going on [in the Mouse experiment] was much more likely to be mutation accumulation; with deleterious (but non-fatal) genes incrementally accumulating with each generation and generating a wide range of increasingly maladaptive behavioural pathologies; this process rapidly overwhelming and destroying the population before any beneficial mutations could emerge to ‘save’ the colony from extinction. …
The reason why mouse utopia might produce so rapid and extreme a mutation accumulation is that wild mice naturally suffer very high mortality rates from predation. …
Thus mutation selection balance is in operation among wild mice, with very high mortality rates continually weeding-out the high rate of spontaneously-occurring new mutations (especially among males) – with typically only a small and relatively mutation-free proportion of the (large numbers of) offspring surviving to reproduce; and a minority of the most active and healthy (mutation free) males siring the bulk of each generation.
However, in Mouse Utopia, there is no predation and all the other causes of mortality (e.g., starvation, violence from other mice) are reduced to a minimum – so the frequent mutations just accumulate, generation upon generation – randomly producing all sorts of pathological (maladaptive) behaviours.
Today, almost everyone in the developed world has plenty of food, a comfortable home, and doesn’t have to worry about dying of bubonic plague. We live in humantopia, where the biggest factor influencing how many kids you have is how many you want to have.
Back in 1930, infant mortality rates were highest among the children of unskilled manual laborers, and lowest among the children of professionals (IIRC, this is British data.) Today, infant mortality is almost non-existent, but voluntary childlessness has inverted this phenomenon:
Yes, the percent of childless women appears to have declined since 1994, but the overall pattern of who is having children still holds. Further, while only 8% of women with post-graduate degrees have 4 or more children, 26% of those who never graduated from high school have 4+ kids. Meanwhile, the age of first-time moms has continued to climb.
Take a moment to consider the high-infant mortality situation: an average couple has a dozen children. Four of them, by random good luck, inherit a good combination of the couple’s genes and turn out healthy and smart. Four, by random bad luck, get a less lucky combination of genes and turn out not particularly healthy or smart. And four, by very bad luck, get some unpleasant mutations that render them quite unhealthy and rather dull.
Infant mortality claims half their children, taking the least healthy. They are left with four bright children and two moderately intelligent children. The four bright children succeed at life, marry well, and end up with several healthy, surviving children of their own, while the moderately intelligent do okay and end up with a couple of children each.
On average, society’s overall health and IQ should hold steady or even increase over time, depending on how strong the selective pressures actually are.
Or consider a consanguineous couple with a high risk of genetic birth defects: perhaps a full 80% of their children die, but 20% turn out healthy and survive.
Today, by contrast, your average couple has two children. One of them is lucky, healthy, and smart. The other is unlucky, unhealthy, and dumb. Both survive. The lucky kid goes to college, majors in underwater intersectionist basket-weaving, and has one kid at age 40. That kid has Down Syndrome and never reproduces. The unlucky kid can’t keep a job, has chronic health problems, and 3 children by three different partners.
Your consanguineous couple migrates from war-torn Somalia to Minnesota. They still have 12 kids, but three of them are autistic with IQs below the official retardation threshold. “We never had this back in Somalia,” they cry. “We don’t even have a word for it.”
People normally think of dysgenics as merely “the dumb outbreed the smart,” but genetic load applies to everyone–men and women, smart and dull, black and white, young and especially old–because we all accumulate random copying errors in our DNA.
I could offer a list of signs of increasing genetic load, but there’s no way to avoid cherry-picking trends I already know are happening, like falling sperm counts or rising (diagnosed) autism rates, so I’ll skip that. You may substitute your own list of “obvious signs society is falling apart at the genes” if you so desire.
Nevertheless, the transition from 30% (or greater) infant mortality to almost 0% is amazing, both on a technical level and because it heralds an unprecedented era in human evolution. The selective pressures on today’s people are massively different from those our ancestors faced, simply because our ancestors’ biggest filter was infant mortality. Unless infant mortality acted completely at random–taking the genetically loaded and unloaded alike–or on factors completely irrelevant to load, the elimination of infant mortality must continuously increase the genetic load in the human population. Over time, if that load is not selected out–say, through more people being too unhealthy to reproduce–then we will end up with an increasing population of physically sick, maladjusted, mentally ill, and low-IQ people.
If all of the above is correct, then I see only 4 ways out:
1. Do nothing: Genetic load increases until the population is non-functional and collapses, resulting in a return of Malthusian conditions, invasion by stronger neighbors, or extinction.
2. Sterilization or other weeding out of high-load people, coupled with higher fertility among low-load people.
3. Abortion of high-load fetuses.
4. Genetic engineering to repair high-load genomes.
#1 sounds unpleasant, and #2 would result in masses of unhappy people. We don’t have the technology for #4, yet. I don’t think the technology is quite there for #2, either, but it’s much closer–we can certainly test for many of the deleterious mutations that we do know of.
Note: I am not a doctor nor any other kind of medical professional. This post is not intended to be medical advice, but a description of one person’s personal experience. Please consult with a real medical professional if you need advice about any medical problems. Thank you.
Hypoglycemia is a medical condition in which the sufferer has too little sugar (glucose) in their bloodstream, like an inverse diabetes. Diabetics either can’t produce insulin or can’t respond to it properly, so their cells cannot absorb glucose from the blood. Hypoglycemics over-produce or over-respond to insulin, driving too much sugar into the cells and leaving too little in the blood.
There are actually two kinds of hypoglycemia–general low blood sugar, which can be caused by not having eaten recently, and reactive hypoglycemia, caused by the body producing too much insulin in response to a sugar spike.
What does hypoglycemia feel like?
It’s difficult to describe, and I make no claim that this is how other hypoglycemics feel, but for me it’s a combination of feeling like my heart is beating too hard and weakness in my limbs. I start feeling light-headed, shaky, and in extreme cases, can collapse and pass out.
It’s not fun.
So how do I know it’s not just psychosomatic?
The simple answer is that sometimes I start feeling nasty after eating something I was told “has no sugar,” check the label, and sure enough, there’s sugar.
By the way, “evaporated cane juice” IS SUGAR.
It took several years to piece all of the symptoms together and figure out that my light-headed fainting spells were a result of eating specific foods, and that I could effectively prevent them by changing my diet and making sure I eat regularly.
I don’t fancy doing experiments on myself by purposefully trying to trigger hypoglycemia, so my list of foods I avoid can’t be exact, but extrapolated based on what I’ve experienced:
More than a couple bites of any high-sugar item like ice cream, candy, cookies, chocolate, or flavored yogurt.
Yes, yogurt. Lots of people like to tout flavored yogurts as “health food.” Bollocks. They strip out the good, tasty fats and then try to make it palatable again by loading it up with sugar, creating an abomination that makes me feel as nasty as a bowl of ice cream does. “Health food” my butt.
I also avoid all sugary drinks, like soda and fruit juice.
Yes, fruit juice. The sugar in fruit juice is mostly fructose, and your body processes it into glucose just like other sugars. A cup or two of juice and I start feeling the effects, just as with any other sugary thing.
(Note: the exact mechanism of sugar metabolism varies with the chemical structure of the individual sugar, but all sugars ultimately get broken down into glucose. Fruit sugar is fructose, the same stuff that’s in High Fructose Corn Syrup.)
I generally don’t have a problem eating fruit.
I don’t eat/drink products with fake sugars, like Diet Soda or sugar-free ice cream, on the grounds that I don’t really know how the body will ultimately react to these artificial chemicals and because I don’t want to develop a taste for sweet things. There’s a lot of habit involved in eating, and if I start craving sweets that I can’t have, I’m going to be a lot more miserable than if I drink a glass of water now and forgo a Diet Soda.
A quick digression about artificial foods: once upon a time, people were very concerned about saturated fats in their diets, so they started eating foods made with laboratory-produced “trans fats” instead. The difference between a regular fat and a trans fat is chemical: the regular fat it’s based on is typically a liquid (that is, an oil), and flipping part of the fat molecule from one side of a double bond to the other (from the cis to the trans configuration) turns it into a room-temperature solid. The great thing about trans fats is that they’re shelf-stable–that is, they won’t go rancid quickly at room temperature–and can be made from plant oils instead of animal fats. (Plants are much cheaper to grow than animals.) The downside is that our bodies aren’t quite sure how to digest them or incorporate them into cell membranes, and they appear to contribute to heart disease.
So… You were probably better off just frying things in lard like the Amish do than switching to margarine.
The moral of the story is that I am skeptical of lab-derived foods. They might be just fine, but I have plenty to eat and drink already so I don’t see any reason to take a chance.
Finally, I eat bananas, pasta, and cereals in moderation, and certainly not first thing in the morning. These are all complex-carbohydrate items, so they aren’t as bad as the pure sugar items, but I am cautious about them.
Yes, timing matters–your body absorbs sugars more quickly after fasting than when you’ve already eaten, which is why your mother always told you to eat your dinner first and dessert second. My hypoglycemia is therefore worst in the morning, when I haven’t eaten yet. Back in the day I had about 20 to 30 minutes after waking up to get breakfast, or else I would start getting shaky and weak and have to lie down and try to convince someone else to get me some breakfast. Likewise, if I ate the wrong things for breakfast, like sugary cereals or bananas, I also had to lie down afterwards.
I’ve since discovered that if I have a cup of coffee first thing in the morning, my blood sugar doesn’t crash and I have a much longer window in which to eat breakfast, so I have time to get the kids ready for school and then eat. I don’t know what exactly it is about the coffee that helps–is it just having a cup of liquid? Is it the milk I put in there? The coffee itself? All three together? I just know that it works.
As with all things food and diet related, it’s probably more useful to know what I can eat than what I can’t: Meat, milk, cheese, sandwiches, lasagna, nuts, peanut butter, potatoes with butter + cheese, beets, soup, soy, coconuts, pizza, most fruit, coffee, tea, etc.
It’s really not bad.
In the beginning, I was occasionally sad because I’d get dragged to the ice cream shop and watch everyone else eat ice cream while I couldn’t have any (technically I can have a couple of bites but they don’t sell it in that quantity.) But when eating something makes you feel really bad, you tend to stop wanting to eat it.
So long as I have my morning coffee, avoid sweets, and eat at regular intervals, I feel 100% fine. (And coffee excepted, this is what nutritionists say you’re supposed to do, anyway.) I don’t feel sick, I don’t feel weak or dizzy, I don’t shake, etc.
The only problem, such as it is, is that I live in a society that assumes I can eat sugar and assumes that I am concerned about diabetes and gaining weight. Every pregnancy, for example, doctors try to test me for gestational diabetes. The gestational diabetes test involves fasting, drinking a bottle of pure glucose, and then seeing what my blood sugar does. I have yet to talk to any ob-gyn (or midwife practice) with policies in place for handling hypoglycemic patients. Every single one has a blanket policy of making all of their patients drink bottles of glucose. No, I am not drinking your goddamn glucose.
Obviously I have to bring a sack lunch to group events where the “catered meal” turns out to be donuts and cookies. (“Oh but there is a tray with celery on it! You can eat that, right?” No. No I can’t. I can’t keep my blood sugar levels from dropping by eating celery.) And of course I look like a snob at parties (No, sorry, I don’t want the punch. No, no pie for me. No, no cookies. Ah, no, I don’t eat cake. Look, do you have any potatoes?) But these are minor quibbles, and easily dealt with. Certainly compared with Type I Diabetics who must constantly monitor their blood sugar levels and inject insulin, I have nothing to complain about. To be honest, I don’t even think of myself as having a problem, I just think of society as weird.
Step back a moment and look at matters in historical perspective. For about 190,000 years, all humans ate hunter-gatherer diets. About 10,000 years ago, more or less, our ancestors started practicing agriculture and began eating lots of grain. (Hunter-gatherers also ate grain, but not in the same quantities.) Only in the past couple of centuries has refined sugar become widespread, and only in the past few decades have sugars like HFCS become routinely added to regular foods.
Consider fruit juice, which seems natural. It actually takes a fair amount of energy (often mechanized) to squeeze the juice out of an apple. Most of the juice our ancestors drank was fermented, ie, hard cider or wine, which was necessary to keep it from going bad in the days before pasteurization and modern bottling techniques. Fermentation, of course, whether in pickles, yogurt, wine, or bread, transforms natural sugars into acids, alcohols, or gasses (the bubbles in bread.)
In other words, your ancestors probably didn’t drink too many glasses of fresh, unfermented juice. Even modern fruit is probably much sweeter than the fruits our ancestors ate–compare the sugar levels of modern hybrid corns developed in laboratories to their ancestors from the eighteen hundreds, for example. (Yes, I know corn is a “grain” and not a “fruit.” Also, a banana is technically a “berry” but a raspberry is not; it’s an “aggregate fruit.” These distinctions are irrelevant to the question of how much fructose is in the plant.)
The apple was first brought to the United States by European settlers seeking freedom in a new world. At first, however, these European cultivars failed to thrive in the American climate, having adapted to environmental conditions an ocean away. They did, however, release seeds, leading to the fertilization and eventual germination of countless new apple breeds. Suddenly, the number of domesticated apples in North America skyrocketed, and the species displayed an amount of genetic diversity that far surpassed that of Europe or other areas of the world (Juniper).
…Traditionally, apple production had been a domestic affair, with most crops being grown on private properties and family orchards. However, a rise in commercial agriculture at the beginning of the twentieth century, the institution of industrial farming practices, and the introduction of electric refrigeration in transportation all impacted the process of growing apples, and these innovations caused the industry to grow. This expansion of commercial apple growing eventually caused apple biodiversity to decline because growers decided to narrow apple production to only a handful of select cultivars based primarily on two key selling factors: sweetness and appearance. In so doing, the thousands of other existing apple varieties, each specialized for a different use, started to become obsolete in the face of more universally accepted varieties, including the infamous Red Delicious, a sugary sweet and visually appealing apple that has become the poster child of the industry (Pollan). …
Rather than rely on natural crossbreeding and pure chance to hopefully create a successful apple variety, growers instead turned to science, and they began implementing breeding practices to develop superior apples that embodied their desired characteristics. … As a result, heirloom and other traditional varieties became all but irrelevant; banished from commercial orchards, they were left to grow in front yards, small local orchards, or in the wild. … Indeed, according to one study, of the 15,000 varieties of apples that were once grown in North America, about eighty percent have vanished (O’Driscoll). It should be noted that a number of these faded because they were originally grown for hard cider, a beverage that fell out of popularity during Prohibition. … Such practices now mean that forty percent of apples sold in grocery markets are a single variety: the Red Delicious (O’Driscoll).
There’s certainly nothing evolutionarily normal about eating ice cream for dinner–your ancestors didn’t even have refrigerators.
So to me, the odd thing isn’t that I can’t eat these strange new foods in large quantities, but that so many other people go ahead and eat them.
Yes, I know they taste good. But like most people, I have normative biases that make me assume that everyone else thinks the same way I do, so I find it weird that “food that makes people feel bad” is so common.
And you might say, “Well, it doesn’t actually make other people feel bad; everyone else can eat these things without trouble,” but last time I checked, society was “suffering an obesity epidemic,” the majority of people were overweight, “metabolic syndrome,” pre-diabetes and Type II Diabetes were rampant, etc., so I really don’t think everyone else can eat these things without trouble. Maybe it’s a different, less immediately noticeable kind of trouble, but it’s trouble nonetheless.
Ultimately, maybe hypoglycemia is a blessing in disguise.
It is easy to romanticize the Gypsies–quaint caravans, jaunty music, and the nomadic lifestyle of the open road all lend themselves to pleasant fantasies. The reality of Gypsy life is much sadder. They are plagued by poverty, illiteracy, violence, the diseases of high consanguinity, and the meddling of outsiders, some better intentioned than others.
I’m going to start off with something which, if true, is rather poetic.
The Gypsies refer to themselves as Rom (or Romani.) I prefer “Gypsy” because I am an American who speaks English and “Gypsy” is the most accepted, well-known ethnonym in American English, but I am also familiar with Rom.
Anyway, there are a couple of other nomadic groups which appear to be related to the Rom, called the Lom and Dom (the three groups’ languages are, respectively, Romani, Lomavren, and Domari.) Genetically, these three groups may be the results of different waves of migration from India, where they may have originated from the Domba (or Dom) people.
All four groups speak Indo-European languages. According to Wikipedia:
Its presumed root, ḍom, which is connected with drumming, is linked to damara and damaru, Sanskrit terms for “drum” and the Sanskrit verbal root डम् ḍam- ‘to sound (as a drum)’, perhaps a loan from Dravidian, e.g. Kannada ḍamāra ‘a pair of kettle-drums’, and Telugu ṭamaṭama ‘a drum, tomtom’.
Given the Gypsies’ reputation for musical ability, there is something lovely and poetic about having a name that literally means “Drum.”
Unfortunately, the rest of the picture is not so cheerful.
The new socialist government in postwar Poland aspired to build a nationally and ethnically homogenous state. Although the Gypsies accounted for about .005 percent of the population, “the Gypsy problem” was labeled an “important state task,” and an Office of Gypsy Affairs was established under the jurisdiction of the Ministry of Internal Affairs–that is, the police. It was in operation until 1989.
In 1952 a broad program to enforce the settlement of Gypsies also came into effect: it was known as the Great Halt … The plan belonged to the feverish fashion for “productivization” which, with its well-intentioned welfare provisions, in fact imposed a new culture of dependency on the Gypsies, who had always opposed it. Similar legislation would be adopted in Czechoslovakia (1958), in Bulgaria (1958), and in Romania (1962), as the vogue for forced assimilation gathered momentum. … by the late 1960s settlement was the goal everywhere. In England and Wales … the 1968 Caravan Sites Act aimed to settle Gypsies (partially by a technique of population control known as “designation” in which whole large areas of the country were declared off-limits to Travelers). …
But no one has ever thought to ask the Gypsies themselves. And accordingly all attempts at assimilation have failed. …
In a revised edition of his great book The Gypsies in Poland, published in 1984, Ficowski reviews the results of the Big Halt campaign. “Gypsies no longer lead a nomadic life, and the number of illiterates has considerably fallen.” But even these gains were limited because Gypsy girls marry at the age of twelve or thirteen, and because “in the very few cases where individuals are properly educated, they usually tend to leave the Gypsy community.” The results were disastrous: “Opposition to the traveling of the Gypsy craftsmen, who had taken their tinsmithing or blacksmithing into the uttermost corner of the country, began gradually to bring about the disappearance of … most of the traditional Gypsy skills.” And finally, “after the loss of opportunities to practice traditional professions, [for many Gypsies] the main source of livelihood became preying on the rest of society.” Now there really was something to be nostalgic about. Wisdom comes too late. The owl of Minerva flies at dusk.
That a crude demographic experiment ended in rootlessness and squalor is neither surprising nor disputed…
Of course, Gypsy life was not so great before settlement, either. Concentrations of poverty in the middle of cities may be much easier to measure and deplore than half-invisible migrant people on the margins of society, but no one appreciates being rounded up and forced into ghettos.
I am reminded here of all of the similar American attempts, from Pruitt-Igoe to Cabrini-Green. Perhaps people had good intentions upon building these places. New, clean, cheap housing. A community of people like oneself, in the heart of a thriving city.
And yet they’ve all failed pretty miserably.
On the other hand, the Guardian reports on violence (particularly domestic) in Gypsy communities in Britain:
…a study in Wrexham, cited in a paper by the Equality and Human Rights Commission, 2007, found that 61% of married English Gypsy women and 81% of Irish Travellers had experienced domestic abuse.
The Irish Travellers are ethnically Irish, not Gypsy, but lead similarly nomadic lives.
“I left him and went back to my mammy but he kept finding me, taking me home and getting me pregnant,” Kathleen says. She now feels safe because she has male family members living on the same site. “With my brother close by, he wouldn’t dare come here.” …
But domestic violence is just one of the issues tackled by O’Roarke during her visits. The welfare needs, particularly those of the women and girls, of this community are vast. The women are three times more likely to miscarry or have a still-born child compared to the rest of the population, mainly, it is thought, as a result of reluctance to undergo routine gynaecological care, and infections linked to poor sanitation and lack of clean water. The rate of suicides among Traveller women is significantly higher than in the general population, and life expectancy is low for women and men, with one third of Travellers dying before the age of 59. And as many Traveller girls are taken out of education prior to secondary school to prevent them mixing with boys from other cultures, illiteracy rates are high. …
Things seem set to get worse for Traveller women. Only 19 days after the general election last year, £50m that had been allocated to building new sites across London was scrapped from the budget. O’Roarke is expecting to be the only Traveller liaison worker in the capital before long – her funding comes from the Irish government.
“Most of the women can’t read or write. Who is supposed to help them if they get rid of the bit of support they have now?” asks O’Roarke. “We will be seeing Traveller women and their children on the streets because of these cuts. If they get a letter saying they are in danger of eviction but they can’t read it, what are they supposed to do?”
Welfare state logic is painful. Obviously Britain is a modern, first-world country with a free education system in which any child, male or female, can learn to read (unless they are severely low-IQ.) If Gypsies and Travellers want to preserve their cultures with some modicum of dignity, then they must read, because literacy is necessary in the modern economy. Forced assimilation or not, no one really needs traditional peripatetic tinsmiths and blacksmiths anymore. Industrialization has eliminated such jobs.
Kathleen, after spending time in a refuge after finally managing to escape her husband, was initially allocated a house, as opposed to a plot on a [trailer] site. Almost immediately her children became depressed. “It’s like putting a horse in a box. He would buck to get out,” says Kathleen. “We can’t live in houses; we need freedom and fresh air. I was on anti-depressives. The children couldn’t go out because the neighbours would complain about the noise.”
Now this I am more sympathetic to. While I dislike traveling, largely because my kids always get carsick, I understand that plenty of people actually like being nomadic. Indeed, I wouldn’t be surprised if some people were genetically inclined to be outside, to move, to be constantly on the road, while others were genetically inclined to settle down in one place. To try to force either person into a lifestyle contrary to their own nature would be cruel.
Medical data on 58 Gypsies in the area of Boston, Massachusetts, were analysed together with a pedigree linking 39 of them in a large extended kindred. Hypertension was found in 73%, diabetes in 46%, hypertriglyceridaemia in 80%, hypercholesterolaemia in 67%, occlusive vascular disease in 39%, and chronic renal insufficiency in 20%. 86% smoked cigarettes and 84% were obese. Thirteen of twenty-one marriages were consanguineous, yielding an inbreeding coefficient of 0.017. The analysis suggests that both heredity and environment influence the striking pattern of vascular disease in American Gypsies.
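For the curious, an inbreeding coefficient like that 0.017 is just the population mean of Wright’s F, which for any individual is fixed by how his parents are related (1/16 for children of first cousins, 1/64 for second cousins). Here is a minimal sketch; the split between first- and second-cousin unions below is purely illustrative, not the Boston study’s actual breakdown:

```python
# Wright's inbreeding coefficient F for offspring of common union types.
F_BY_UNION = {
    "first cousins": 1 / 16,    # parents share two grandparents
    "second cousins": 1 / 64,   # parents share two great-grandparents
    "unrelated": 0.0,
}

def mean_inbreeding(marriages):
    """Population mean F: per-union F weighted by marriage counts.

    `marriages` maps a union type to the number of such marriages.
    """
    total = sum(marriages.values())
    return sum(F_BY_UNION[kind] * n for kind, n in marriages.items()) / total

# Illustrative mix only: 21 marriages, 13 consanguineous,
# here assumed to be 4 first-cousin and 9 second-cousin unions.
example = {"first cousins": 4, "second cousins": 9, "unrelated": 8}
print(round(mean_inbreeding(example), 3))
```

A mix weighted toward second-cousin unions lands in the same ballpark as the reported 0.017; a population of pure first-cousin marriages at that rate would come out more than twice as high.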
Although far from systematic, the published information indicates that medical genetics has an important role to play in improving the health of this underprivileged and forgotten people of Europe. Reported carrier rates for some Mendelian disorders are in the range of 5 -15%, sufficient to justify newborn screening and early treatment, or community-based education and carrier testing programs for disorders where no therapy is currently available. …
Reported gene frequencies are high for both private and “imported” mutations, and often exceed by an order of magnitude those for global populations. For example, galactokinase deficiency, whose worldwide frequency is 1:150,000 to 1:1,000,000 [56,57], affects 1 in 5,000 Romani children; autosomal dominant polycystic kidney disease (ADPKD) has a global prevalence of 1:1000 individuals worldwide and 1:40 among the Roma in some parts of Hungary; primary congenital glaucoma ranges between 1:5,000 and 1:22,000 worldwide [59,60] and about 1:400 among the Roma in Central Slovakia [61,62].
Carrier rates for a number of disorders have been estimated to be in the 5 to 20% range (Table 3). …
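To see how carrier rates in the 5-15% range fall out of disease frequencies like those above, here is a minimal Hardy-Weinberg sketch. Caveat: it assumes random mating, which an endogamous, inbred population violates (inbreeding lets a rarer allele produce the same incidence), so treat the output as a rough upper estimate:

```python
from math import sqrt

def carrier_rate(incidence):
    """Heterozygote (carrier) frequency 2pq for a recessive disorder,
    assuming Hardy-Weinberg equilibrium: incidence = q^2."""
    q = sqrt(incidence)   # recessive allele frequency
    p = 1 - q
    return 2 * p * q

# Galactokinase deficiency, using the figures quoted above:
print(f"worldwide (1:150,000): {carrier_rate(1 / 150_000):.2%}")
print(f"Romani    (1:5,000):   {carrier_rate(1 / 5_000):.2%}")
```

A 30-fold difference in incidence translates into only about a 5-fold difference in carrier rate, because incidence scales with the square of the allele frequency.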
Historical demographic data are limited, however tax registries and census data give an approximate idea of population size and rate of demographic growth through the centuries (Table 4). A small size of the original population is suggested by the fact that although most of the migrants arriving in Europe in the 11th-12th century remained within the limits of the Ottoman Empire [1,75], the overall number of Roma in its Balkan provinces in the 15th century was estimated at only 17,000. …
During its subsequent history in Europe, this founder population split into numerous socially divided and geographically dispersed endogamous groups, with historical records from different parts of the continent consistently describing the travelling Gypsies as “a group of 30 to 100 people led by an elder” [1,2]. These splits, a possible compound product of the ancestral tradition of the jatis of India, and the new social pressures in Europe (e.g. Gypsy slavery in Romania and repressive legislation banning Gypsies from most western European countries [1,2]), can be regarded as secondary bottlenecks, reducing further the number of unrelated founders in each group. The historical formation of the present-day 8 million Romani population of Europe is therefore the product of the complex initial migrations of numerous small groups, superimposed on which are two large waves of recent migrations from the Balkans into Western Europe, in the 19th – early 20th century, after the abolition of slavery in Rumania [1,2,76] and over the last decade, after the political changes in Eastern Europe [7,8]. …
Individual groups can be classified into major metagroups [1,2,75]: the Roma of East European extraction; the Sinti in Germany and Manouches in France and Catalonia; the Kaló in Spain, Ciganos in Portugal and Gitans of southern France; and the Romanichals of Britain. The greatest diversity is found in the Balkans, where numerous groups with well defined social boundaries exist. The 700-800,000 Roma in Bulgaria belong to three metagroups, comprising a large number of smaller groups.
the actual cousin marriage rates vary though from (as you’ll see below) ca. 10-30% first cousin only marriages amongst gypsies in slovakia to 29% first+second cousin marriages amongst gypsies in spain [pdf] to 36% first+second cousin marriages amongst gypsies in wales [pdf]. these rates are comparable to those found in places like turkey (esp. eastern turkey) or north africa…or southern india.
I’m not quoting the whole thing for you; you’ll just have to go read the whole thing yourself.
The 1987 national study of Travellers’ health status in Ireland [11] reported a high death rate for all causes and lower life expectancy for Irish Travellers: women 11.9 years and men 9.9 years lower than the non‐Traveller population. Our pilot study of 87 Gypsies and Travellers matched for age and sex with indigenous working class residents in a socially deprived area of Sheffield [12] reported statistically and clinically significant differences between Gypsies and Travellers and their non‐Gypsy comparators in some aspects of health status, and significant associations with smoking and with frequency of travelling. The report of the Confidential Enquiries into Maternal Deaths in the UK, 1997–1999, found that Gypsies and Travellers have “possibly the highest maternal death rate among all ethnic groups” [13].
This is all kind of depressing, but I have a thought: if Gypsies want to preserve their culture and improve their lives, perhaps the disease burden may be lessened and IQs raised by encouraging young Gypsy men and women to find partners in other Gypsy groups from other countries instead of from within their own kin groups.
Further evidence for the South Asian origin of the Romanies came in the late 1990s. Researchers doing DNA analysis discovered that Romani populations carried large frequencies of particular Y chromosomes (inherited paternally) and mitochondrial DNA (inherited maternally) that otherwise exist only in populations from South Asia.
47.3% of Romani men carry Y chromosomes of haplogroup H-M82 which is rare outside South Asia. Mitochondrial haplogroup M, most common in Indian subjects and rare outside Southern Asia, accounts for nearly 30% of Romani people. A more detailed study of Polish Roma shows this to be of the M5 lineage, which is specific to India. Moreover, a form of the inherited disorder congenital myasthenia is found in Romani subjects. This form of the disorder, caused by the 1267delG mutation, is otherwise known only in subjects of Indian ancestry. This is considered to be the best evidence of the Indian ancestry of the Romanis.
I must stop here and note that I have painted a largely depressing picture. It is not the picture I want to paint. I would like to paint a picture of hope and triumph. Certainly there are many talented, hard-working, kind, decent, and wonderful Gypsies. I hope the best for them, and a brighter future for their children.
Much of evolutionary literature focuses on the straightforward relationship between predator and prey, or on competition between members of the same species for limited resources, mates, etc.
But today we’re going to focus on fraud.
Red touch yellow, kill a fellow. Red touch black, friend to Jack.
The Coral snake is deadly poisonous. (Or venomous, as they say.) The Milk snake is harmless, but by mimicking the coral’s red, black, and yellow bands, it tricks potential predators into believing that it, too, will kill them.
The milk snake is a fraud, benefiting from the coral’s venom without producing any of its own.
Nature has many frauds, from the famously brood-parasitic Cuckoos to the nightmare-fuel snail eyestalk-infecting flatworms, to the fascinating mimic octopus, who can change the colors and patterns on its skin in the blink of an eye.
But just as predator and prey evolve in tandem–the prey developing new strategies to outwit predators, and predators in turn developing new strategies to defeat them–so also with fraud: animals who detect frauds out-compete those who are successfully deceived.
Complex human systems depend enormously on trust–and thus are prime breeding grounds for fraud.
Let’s take the job market. Employers want to hire the best employees possible (at the lowest possible prices, of course.) So employers do their best to (efficiently) screen potential candidates for work-related qualities like diligence, honesty, intelligence, and competency.
Employees want to eat. Diligence, honesty, years spent learning how to do a particular job, etc., are not valued because they help the company, but because they result in eating (and, if you’re lucky, reproduction.)
When there are far more employees competing against each other for jobs than there are openings, not only do employers have a chance to ratchet up the qualifications they demand in applicants, they pretty much have to. No employer trying to fill a single position has time to read 10,000 resumes, nor would it be in their interest to do so. So employers come up with requirements–often totally arbitrary–to automatically cut down the number of applications.
“Must have 3-5 years work experience” = people with 6 years of experience automatically rejected.
“Must be currently employed with no gaps in resume” = no one who took time off to have children. (This is one of the reasons birthrates are so low.)
“Must have X degree” = person with 15 years experience in the field but no degree automatically rejected.
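The screening rules above amount to a crude boolean filter. A toy version (hypothetical field names; real applicant-tracking systems differ, but the logic is the same):

```python
# A toy version of the arbitrary resume filters listed above.
def passes_screen(applicant):
    rules = [
        3 <= applicant["years_experience"] <= 5,  # "3-5 years": 6 years is out
        not applicant["resume_gaps"],             # no time off for children
        applicant["has_degree"],                  # 15 years' experience, no degree: out
    ]
    return all(rules)

veteran = {"years_experience": 6, "resume_gaps": False, "has_degree": True}
print(passes_screen(veteran))  # the 6-year veteran is auto-rejected
```

Note that `all(rules)` means one failed checkbox rejects an otherwise perfect candidate, which is exactly why the filters are cheap for employers and brutal for applicants.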
The result, of course, is that prospective employees begin lying, cheating, or finding other deceptive ways to trick employers into reading their resumes. Workers with 6 years of experience put down 5. Workers with 2 record 3. People who can’t get into American medical schools attend Caribbean ones. “Brought donuts to the meeting” is inflated to “facilitated cross-discipline network conversation.” Whites who believe employers are practicing AA tickybox “black” on their applications. And as more and more jobs that formerly required nothing more than graduating high school start requiring college degrees, more and more colleges start offering bullshit degrees so that everyone can get one.
The higher the competition and more arbitrary the rules, the higher the incentives for cheating.
It began with a test-fixing scandal so massive that it led to 2,000 arrests, including top politicians, academics and doctors. Then suspects started turning up dead. What is the truth behind the Vyapam scam that has gripped India? …
For at least five years, thousands of young men and women had paid bribes worth millions of pounds in total to a network of fixers and political operatives to rig the official examinations run by the Madhya Pradesh Vyavsayik Pariksha Mandal – known as Vyapam – a state body that conducted standardised tests for thousands of highly coveted government jobs and admissions to state-run medical colleges. When the scandal first came to light in 2013, it threatened to paralyse the entire machinery of the state administration: thousands of jobs appeared to have been obtained by fraudulent means, medical schools were tainted by the spectre of corrupt admissions, and dozens of officials were implicated in helping friends and relatives to cheat the exams. …
The list of top state officials placed under arrest reads like the telephone directory of the Madhya Pradesh secretariat. The most senior minister in the state government, Laxmikant Sharma – who had held the health, education and mining portfolios – was jailed, and remains in custody, along with his former aide, Sudhir Sharma, a former schoolteacher who parlayed his political connections into a vast mining fortune.
One of the things I find amusing (and, occasionally, frustrating) about Americans is that many of us are still so trusting. What we call “corruption”–what we imagine as an infection in an otherwise healthy entity–is the completely normal way of doing business throughout most of the world. (I still run into people who are surprised to discover that there are a lot of scams being run out of Nigeria. Nigerian scammers? Really? You don’t say.)
It’s good to get out of your bubble once in a while. Go hang out on international forums with people from the third world, and listen in on some of the conversations between Indians and Pakistanis or Indians and Chinese. Chinese and Indians constantly accuse each other’s countries of engaging in massive educational cheating.
Maybe they know something we don’t.
People want jobs because jobs mean eating; a good job means good eating, ergo every family worth its salt wants their children to get good jobs. But in a nation with 1.2 billion people and only a few good jobs, competition is ferocious:
In 2013, the year the scam was first revealed, two million young people in Madhya Pradesh – a state the size of Poland, with a population greater than the UK – sat for 27 different examinations conducted by Vyapam. Many of these exams are intensely competitive. In 2013, the prestigious Pre-Medical Test (PMT), which determines admission to medical school, had 40,086 applicants competing for just 1,659 seats; the unfortunately named Drug Inspector Recruitment Test (DIRT), had 9,982 candidates striving for 16 vacancies in the state department of public health.
For most applicants, the likelihood of attaining even low-ranking government jobs, with their promise of long-term employment and state pensions, is incredibly remote. In 2013, almost 450,000 young men and women took the exam to become one of the 7,276 police constables recruited that year – a post with a starting salary of 9,300 rupees (£91) per month. Another 270,000 appeared for the recruitment examination to fill slightly more than 2,000 positions at the lowest rank in the state forest service.
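To put those odds in perspective, here are the acceptance rates implied by the figures above (using the rounded numbers from the quoted article, so the exact ratios are approximate):

```python
# (applicants, seats) per exam, as quoted in the Guardian's Vyapam coverage.
exams = {
    "Pre-Medical Test (PMT)":       (40_086, 1_659),
    "Drug Inspector (DIRT)":        (9_982, 16),
    "Police constable":             (450_000, 7_276),
    "Forest service, lowest rank":  (270_000, 2_000),
}

for name, (applicants, seats) in exams.items():
    print(f"{name}: roughly 1 in {applicants // seats} ({seats / applicants:.2%})")
```

The drug inspector exam, at roughly 1 seat per 600 applicants, is more selective than any university on earth, for 16 civil-service vacancies.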
Since no one wants to spend their life picking up trash or doing back-breaking manual labor in the hot sun, the obvious solution is to cheat:
The impersonators led the police to Jagdish Sagar, a crooked Indore doctor who had set up a lucrative business that charged up to 200,000 rupees (£2,000) to arrange for intelligent but financially needy medical students to sit examinations on behalf of applicants who could afford to pay.
The families of dumb kids pay for smart kids to take tests for them.
In 2009, police claim, Sagar and Mohindra [Vypam’s systems analyst/data entry guy] had a meeting in Sagar’s car in Bhopal’s New Market bazaar, where the doctor made an unusual proposition: he would give Mohindra the application forms of groups of test-takers, and Mohindra would alter their “roll numbers” to ensure they were seated together so they could cheat from each other. According to Mohindra’s statement to the police, Sagar “offered to pay me 25,000 rupees (£250) for each roll number I changed.”
This came to be known as the “engine-bogie” system. The “engine” would be one of Sagar’s impostors – a bright student from a medical college, taking the exam on behalf of a paying customer – who would also pull along the lower-paying clients sitting next to him by supplying them with answers. … From 2009 to 2013, the police claim, Mohindra tampered with seating assignment for at least 737 of Sagar’s clients taking the state medical exam. …
Mohindra also began just straight-up filling in the bubbles and altering exam scores in the computer for rich kids whose parents had paid him off.
Over the course of only two years, police allege, Mohindra and Trivedi conspired to fix the results of 13 different examinations – for doctors, food inspectors, transport constables, police constables and police sub-inspectors, two different kinds of school teachers, dairy supply officers and forest guards – which had been taken by a total of 3.2 million students.
Remember this if you ever travel to India.
But merely uncovering the scam does not make it go away; witnesses begin dying:
In July 2014, the dean of a medical college in Jabalpur, Madhya Pradesh, Dr SK Sakalle – who was not implicated in the scandal, but had reportedly investigated fraudulent medical admissions and expelled students accused of obtaining their seats by cheating – was found burned to death on the front lawn of his own home. …
In an interview with the Hindustan Times earlier this year, a policeman, whose own son was accused in the scam and died in a road accident, advanced an unlikely yet tantalising theory. He argued that the Vyapam taskforce – under pressure to conduct a credible probe that nevertheless absolved top government officials – had falsely named suspects who were already deceased in order to shield the real culprits.
A competing theory, voiced by journalists covering the scandal in Bhopal, proposes that it will be all but impossible to determine whether the deaths are connected to Vyapam, because the families of many of the dead refuse to admit that their children paid money to cheat on their exams – for fear that the police might arrest the bereaved parents as well.
For India’s poor (and middle class,) scamming is a damned if you do, damned if you don’t affair:
“My brother was arrested four months ago for paying someone to ensure he cleared the police constable exam in 2012,” the man told me. “Some people in our village said, ‘This is Madhya Pradesh, nothing happens without money.’ My brother sold his land and paid them 600,000 rupees.”
In August that year, he was one of 403,253 people who appeared for the recruitment test to become a police constable. … Four months after his marriage, his name popped up in the scam, he lost his job and he was hauled off to prison.
“So now my brother has a wife and his first child, but no job, no land, no money, no prospects and a court case to fight,” the man said. “You can write your story, but write that this is a state of 75 million corrupt people, where there is nothing in the villages and if a man comes to the city in search of an honest day’s work, the politicians and their touts demand money and then throw him into jail for paying.”
I would like to note that in many of these cases, the little guys in the scam, while arguably acting dishonestly and cheating against their neighbors, are basically well-intentioned people who don’t see any other options besides bribing their way into jobs. In the end, these guys often get screwed (or end up dead.)
It’s the people who are taking the bribes and fixing the tests and creating bullshit degrees and profiting off people’s houses burning down who are getting rich off everyone else and ensuring that cheating is the only way to get ahead.
These people are parasites.
Parasitism increases complexity in the host organism, which increases complexity in the parasite in turn:
With selection, evolution can also produce more complex organisms. Complexity often arises in the co-evolution of hosts and pathogens, with each side developing ever more sophisticated adaptations, such as the immune system and the many techniques pathogens have developed to evade it. For example, the parasite Trypanosoma brucei, which causes sleeping sickness, has evolved so many copies of its major surface antigen that about 10% of its genome is devoted to different versions of this one gene. This tremendous complexity allows the parasite to constantly change its surface and thus evade the immune system through antigenic variation.
Animals detect and expel parasites; parasites adapt to avoid detection.
So, too, with human scams.
We tend to increase complexity by adding paperwork.
A few people cheat on their taxes, so the IRS begins randomly auditing people to make sure everyone is complying. A few people refuse to hire African Americans, so companies must keep records on the ethnic/racial identities of everyone they interview for a job. An apartment complex fears it could get sued if a car hits a bicyclist in the parking lot, so it forbids all of the children there from riding their bikes. A college gets sued after a mentally ill student commits suicide on campus, so the college starts expelling all mentally ill students.
Now, while I appreciate certain kinds of complexity (like the sort that results in me having a computer to write on and an internet to post on,) the variety that arises due to a constant war between parasites and hosts doesn’t seem to have much in the way of useful side effects. Perhaps I am missing something, but it does not seem like increasing layers of oversight and bureaucracy in an attempt to cut down cheating makes the world any better–rather the opposite, in fact.
Interestingly, fevers are not diseases nor even directly caused by disease, but by your own immune system responding to disease. By increasing your internal temperature, your body aims to kill off the infection or at least make things too inhospitable for it to breed. Fevers (within a moderate range) are your friends.
They are still unpleasant and have a seriously negative effect on your ability to get anything else done.
An ill patient can do little more than lie in bed and hope for recovery; a sick society does nothing but paperwork.
Certainly the correct response to parasitism is to root it out–paperwork, fever, and all. But the long-term response should focus on restructuring institutions so they don’t become infected in the first place.
In human systems, interdependence in close-knit communities is probably the most reliable guard against fraud. You are unlikely to prosper by cheating your brother (genetically, after all, his success is also half your success,) and people who interact with you often will notice if you do not treat them fairly.
Tribal societies have plenty of problems, but at least you know everyone you’re dealing with.
Modern society, by contrast, forces people to interact with, and depend upon, thousands of people they don’t know, many they’ve met only once and far more they’ve never met at all. When I sit down to dinner, I must simply trust that the food I bought at the grocery store is clean, healthy, and unadulterated; that no one has contaminated the milk, shoved downer cows into the chute, or failed to properly wash the tomatoes. When I drive I depend on other drivers to not be drunk or impaired, and upon the city to properly maintain the roads and direct traffic. When I apply for jobs I hope employers will actually read my resume and not just hire the boss’s nephew; when I go for a walk in the park, I hope that no one will mug me.
With so many anonymous or near-anonymous interactions, it is very easy for people to defraud others and then slip away, never to be seen again. A mugger melts into a crowd; the neighbor whose dog shat all over your yard moves and disappears. Twitter mobs strike out of the blue and then disperse.
So how do we get, successfully, from tight-knit tribes to million+ people societies with open markets?
How do modern countries exist at all?
I suspect that religion–Christianity in the West, probably others elsewhere–has played a major role in encouraging everyone to cooperate with their neighbors by threatening them with eternal damnation if they don’t.
6 Do not take a pair of millstones—not even the upper one—as security for a debt, because that would be taking a person’s livelihood as security.
7 If someone is caught kidnapping a fellow Israelite and treating or selling them as a slave, the kidnapper must die. You must purge the evil from among you. …
10 When you make a loan of any kind to your neighbor, do not go into their house to get what is offered to you as a pledge. 11 Stay outside and let the neighbor to whom you are making the loan bring the pledge out to you. 12 If the neighbor is poor, do not go to sleep with their pledge in your possession. 13 Return their cloak by sunset so that your neighbor may sleep in it. Then they will thank you, and it will be regarded as a righteous act in the sight of the Lord your God.
14 Do not take advantage of a hired worker who is poor and needy, whether that worker is a fellow Israelite or a foreigner residing in one of your towns. 15 Pay them their wages each day before sunset, because they are poor and are counting on it. Otherwise they may cry to the Lord against you, and you will be guilty of sin. …
17 Do not deprive the foreigner or the fatherless of justice, or take the cloak of the widow as a pledge. 18 Remember that you were slaves in Egypt and the Lord your God redeemed you from there. That is why I command you to do this.
To be fair, we have to credit Judaism for Deuteronomy.
Here we have organized religion attempting to bridge the gap between tribalism and universal morality. Enslaving one of your own is an offense punishable by death, but there is no command to rescue the enslaved of other nations. You must treat your own employees well, whether they come from your own tribe or other tribes.
In tribal societies, justice is run through the tribe. People with no families or clans–like orphans and foreigners–therefore cannot access the normal routes to justice.
The new barbarian rulers also disliked the death penalty, but for different reasons. There was a strong feeling that every adult male had a right to use violence and to kill, if need be. This right was of course reciprocal. If you killed a man, his death could be avenged by his brothers and other male kinsmen. The prospect of a vendetta thus created a ‘balance of terror’ that kept violence within limits. So, initially, the barbarians allowed capital punishment only for treason, desertion, and cowardice in combat (Carbasse, 2011, p. 35). [bold mine]
[The Salic Law] is a pact (pactus) “concluded between the Franks and their chiefs,” for the specific purpose of ensuring peace among the people by “cutting short the development of brawls.” This term evidently means private acts of vengeance, the traditional vendettas that went on from generation to generation. In place of the vengeance henceforth forbidden, the law obliged the guilty party to pay the victim (or, in the case of murder, his family) compensation. This was an indemnity whose amount was very precisely set by the law, which described with much detail all of the possible damages, this being to avoid any discussion between the parties and make [murder] settlements as rapid, easy, and peaceful as possible. […] This amount was called the wergild, the “price of a man.” The victim’s family could not refuse the wergild, and once it was paid, the family had to be satisfied. They no longer had the right to avenge themselves (Carbasse, 2011, pp. 33-34).
This situation began to change in the 12th century. One reason was that the State had become stronger. But there also had been an ideological change. The State no longer saw itself as an honest broker for violent disputes that did not challenge its existence. Jurists were now arguing that the king must punish the wicked to ensure that the good may live in peace.
In a tribal system, a victim with no family has no one to bring a suit on their behalf; if they are murdered, there is no one to pay wergild to. This leaves orphans and “foreigners” without any access to justice.
Thus Deuteronomy’s command not to mistreat them (or widows.) They aren’t protected under tribal law, but they are under Yahweh’s.
The threat of divine punishment (and promise of rewards for good behavior,) may have encouraged early Christians to cooperate with strangers. People who would cheat others now have both their own consciences and the moral standards of their Christian neighbors to answer to. The ability to do business with people outside of one’s own family or clan without constant fear of getting ripped off is a necessary prerequisite for the development of free markets, modern economies, and million+ nations. (In short, universalism.)
In the absence of universalist societies that effectively discourage cheating, groups that protect their own will out-compete groups that do not. The Amish, for example, have grown from 5,000 to 300,000 people over the past century (despite significant numbers of Amish children choosing to leave the society every generation.)
(By contrast, my own family has largely failed to reproduce itself–my cousins are all childless, and I have no second cousins.)
The Amish avoid outsiders, keeping their wealth within their own communities. This probably also allows them to steer clear of cheaters and scammers (unlike everyone who lost money in the 2008 housing crash or the 2001 stock market crash.) As insular groups go, the Amish don’t seem too bad–I haven’t heard any reports of them stealing people’s chickens or scamming elderly widows out of their life’s savings.
But humans are not mere action-reaction systems; they have qualia, an inner experience of being.
One of my themes here is the idea that various psychological traits, like anxiety, guilt, depression, or disgust, might not be just random things we feel, but exist for evolutionary reasons. Each of these emotions, when experienced moderately, may have beneficial effects. Guilt (and its cousin, shame,) helps us maintain our social relationships with other people, aiding in the maintenance of large societies. Disgust protects us from disease and helps direct sexual interest at one’s spouse, rather than random people. Anxiety helps people pay attention to crucial, important details, and mild depression may help people concentrate, stay out of trouble, or–very speculatively–have helped our ancestors hibernate during the winter.
In excess, each of these traits is damaging, but a shortage of each trait may also be harmful.
I have commented before on the remarkable statistic that 25% of women are on anti-depressants, and if we exclude women over 60 (and below 20,) the number of women with an “anxiety disorder” jumps to over 30%.
The idea that a full quarter of us are actually mentally ill is simply staggering. I see three potential causes for the statistic:
1. Doctors prescribe anti-depressants willy-nilly to everyone who asks, whether they’re actually depressed or not;
2. Something about modern life is making people especially depressed and anxious;
3. Mental illnesses are side effects of common, beneficial conditions (similar to how sickle cell anemia is a side effect of protection from malaria.)
As you probably already know, sickle cell anemia is a genetic mutation that protects carriers from malaria. Imagine a population where 100% of people are sickle cell carriers–that is, they have one mutated gene, and one regular gene. The next generation in this population will be roughly 25% people who have two regular genes (and so die of malaria,) 50% of people who have one sickle cell and one regular gene (and so are protected,) and 25% of people will have two sickle cell genes and so die of sickle cell anemia. (I’m sure this is a very simplified scenario.)
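The 25/50/25 split is just Mendelian enumeration of the four equally likely allele pairs a child of two carriers can inherit. A minimal Python sketch (the allele labels “A” for the regular gene and “S” for the sickle cell gene are illustrative):

```python
from collections import Counter
from itertools import product

# Each carrier parent has one regular allele "A" and one sickle allele "S".
parents = ("A", "S")

# Enumerate the four equally likely allele combinations a child can inherit,
# one allele from each parent; sort so "AS" and "SA" count as the same genotype.
offspring = ["".join(sorted(pair)) for pair in product(parents, parents)]

counts = Counter(offspring)
freqs = {genotype: n / len(offspring) for genotype, n in counts.items()}

print(freqs)  # {'AA': 0.25, 'AS': 0.5, 'SS': 0.25}
```

So in this idealized all-carrier population, half of each generation stays protected, a quarter loses the protective allele entirely, and a quarter inherits two copies and develops sickle cell anemia.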
So I consider it technically possible for 25% of people to suffer a pathological genetic condition, but unlikely–malaria is a particularly ruthless killer compared to being too cheerful.
Skipping to the point, I think there’s a little of all three going on. Each of us probably has some kind of personality “set point” that is basically determined by some combination of genetics, environmental assaults, and childhood experiences. People deviate from their set points due to random stuff that happens in their lives, (job promotions, visits from friends, car accidents, etc.,) but the way they respond to adversity and the mood they tend to return to afterwards is largely determined by their “set point.” This is all a fancy way of saying that people have personalities.
The influence of random chance on these genetic/environmental factors suggests that there should be variation in people’s emotional set points–we should see that some people are more prone to anxiety, some less prone, and some of average anxiousness.
Please note that this is a statistical should, in the same sense that, “If people are exposed to asbestos, some of them should get cancer,” not a moral should, as in, “If someone gives you a gift, you should send a thank-you note.”
Natural variation in a trait does not automatically imply pathology, but being more anxious or depressive or guilt-ridden than others can be highly unpleasant. I see nothing wrong, a priori, with people doing things that make their lives more pleasant and manageable (and don’t hurt others); this is, after all, why I enjoy a cup of coffee every morning. If you are a better, happier, more productive person with medication (or without it,) then carry on; this post is not intended as a critique of anyone’s personal mental health management, nor a suggestion for how to take care of your mental health.
Our medical/psychological health system, however, operates on the assumption that medications are for pathologies only. There is no form to fill out that says, “Patient would like anti-anxiety drugs in order to live a fuller, more productive life.”
That said, all of these emotions are obviously responses to actual stuff that happens in real life, and if 25% of women are coming down with depression or anxiety disorders, I think we should critically examine whether anxiety and depression are really the diseases we need to be treating, or the body’s responses to some external threat.
In a mixed group, women become quieter, less assertive, and more compliant. This deference is shown only to men and not to other women in the group. A related phenomenon is the sex gap in self-esteem: women tend to feel less self-esteem in all social settings. The gap begins at puberty and is greatest in the 15-18 age range (Hopcroft, 2009).
If more women enter the workforce–either because they think they ought to or because circumstances force them to–and the workforce triggers depression, then as the percent of women formally employed goes up, we should see a parallel rise in mental illness rates among women. Just as Adderall and Ritalin help little boys conform to the requirements of modern classrooms, Prozac and Lithium help women cope with the stress of employment.
As we discussed yesterday, fever is not a disease, but part of your body’s system for re-asserting homeostasis by killing disease microbes and making it more difficult for them to reproduce. Extreme fevers are an over-reaction and can kill you, but a normal fever below 104 degrees or so is merely unpleasant and should be allowed to do its work of making you better. Treating a normal fever (trying to lower it) interferes with the body’s ability to fight the disease and results in longer sicknesses.
Likewise, these sorts of emotions, while definitely unpleasant, may serve some real purpose.
We humans are social beings (and political animals.) We do not exist on our own; historically, loneliness was not merely unpleasant, but a death sentence. Humans everywhere live in communities and depend on each other for survival. Without refrigeration or modern storage methods, saving food was difficult. (Unless you were an Eskimo.) If you managed to kill a deer while on your own, chances are you couldn’t eat it all before it began to rot, and then your chances of killing another deer before you started getting seriously hungry were low. But if you share your deer with your tribemates, none of the deer goes to waste, and if they share their deer with you, you are far less likely to go hungry.
If you end up alienated from the rest of your tribe, there’s a good chance you’ll die. It doesn’t matter if they were wrong and you were right; it doesn’t matter if they were jerks and you were the nicest person ever. If you can’t depend on them for food (and mates!) you’re dead. This is when your emotions kick in.
People complain a lot that emotions are irrational. Yes, they are. They’re probably supposed to be. There is nothing “logical” or “rational” about feeling bad because someone is mad at you over something they did wrong! And yet it happens. Not because it is logical, but because being part of the tribe is more important than who did what to whom. Your emotions exist to keep you alive, not to prove rightness or wrongness.
This is, of course, an oversimplification. Men and women have been subject to different evolutionary pressures, for example. But this is close enough for the purposes of the current conversation.
If modern people are coming down with mental illnesses at astonishing rates, then maybe there is something about modern life that is making people ill. If so, treating the symptoms may make life more bearable for people while they are subject to the disease, but still does not fundamentally address whatever it is that is making them sick in the first place.
It is my own opinion that modern life is pathological, not (in most cases,) people’s reactions to it. Modern life is pathological because it is new, and you are therefore not adapted to it. Your ancestors have probably only lived in cities of millions of people for a few generations at most (chances are good that at least one of your great-grandparents was a farmer, if not all of them.) Naturescapes are calming and peaceful; cities are noisy, crowded, and full of pollution. There is some reason, not yet fully understood, why schizophrenia is more common in cities than on farms. This doesn’t mean that we should just throw out cities, but it does mean we should be thoughtful about them and their effects.
People seem to do best, emotionally, when they have the support of their kin, some degree of ethnic or national pride, economic and physical security, attend religious services, and avoid crowded cities. (Here I am, an atheist, recommending church for people.) The knowledge you are at peace with your tribe and your tribe has your back seems almost entirely absent from most people’s modern lives; instead, people are increasingly pushed into environments where they have no tribe and most people they encounter in daily life have no connection to them. Indeed, tribalism and city living don’t seem to get along very well.
To return to healthy lives, we may need to re-think the details of modernity.
Philosophically and politically, I am a great believer in moderation and virtue as the ethical, conscious application of homeostatic systems to the self and to organizations that exist for the sake of humans. Please understand that this is not moderation in the conventional sense of “sometimes I like the Republicans and sometimes I like the Democrats,” but the self-moderation necessary for bodily homeostasis reflected at the social/organizational/national level.
For example, I have posted a bit on the dangers of mass immigration, but this is not a call to close the borders and allow no one in. Rather, I suspect that there is an optimal amount–and kind–of immigration that benefits a community (and this optimal quantity will depend on various features of the community itself, like size and resources.) Thus, each community should aim for its optimal level. But since virtually no one–certainly no one in a position of influence–advocates for zero immigration, I don’t devote much time to writing against it; it is only mass immigration that is getting pushed on us, and thus mass immigration that I respond to.
Similarly, there is probably an optimal level of communal genetic diversity. Too low, and inbreeding results. Too high, and fetuses miscarry due to incompatible genes. (Rh- mothers, for example, can have difficulty carrying Rh+ fetuses: once a first Rh+ pregnancy sensitizes the mother’s immune system, it identifies a later fetus’s blood as foreign and attacks it, sometimes killing the fetus.) As in agriculture, monocultures are at great risk of getting wiped out by disease; genetic heterogeneity helps ensure that some members of a population can survive a plague. Homogeneity helps people get along with their neighbors, but too much may lead to everyone thinking through problems in similar ways. New ideas and novel ways of attacking problems often come from people who are outliers in some way, including genetically.
There is a lot of talk ’round these parts that basically blames all the crimes of modern civilization on females. Obviously I have a certain bias against such arguments–I of course prefer to believe that women are superbly competent at all things, though I do not wish to stake the functioning of civilization on that assumption. If women are good at math, they will do math; if they are good at leading, they will lead. A society that tries to force women into professions they are not inclined to is out of kilter; likewise, so is a society where women are forced out of fields they are good at. Ultimately, I care about my doctor’s competence, not their gender.
In a properly balanced society, male and female personalities complement each other, contributing to the group’s long-term survival.
Women are not accidents of nature; they are as they are because their personalities succeeded where women with different personalities did not. Women have a strong urge to be compassionate and nurturing toward others, maintain social relations, and care for those in need of help. These instincts have, for thousands of years, helped keep their families alive.
When the masculine element becomes too strong, society becomes too aggressive. Crime goes up; unwinnable wars are waged; people are left to die. When the feminine element becomes too strong, society becomes too passive; invasions go unresisted; welfare spending becomes unsustainable. Society can’t solve this problem by continuing to give both sides everything they want, (this is likely to be economically disastrous,) but must actually find a way to direct them and curb their excesses.
I remember an article on the now-defunct neuropolitics (now that I think of it, the Wayback Machine probably has it somewhere,) on an experiment where groups with varying numbers of “liberals” and “conservatives” had to work together to accomplish tasks. The “conservatives” tended to solve their problems by creating hierarchies that organized their labor, with the leader/s giving everyone specific tasks. The “liberals” solved their problems by incorporating new members until they had enough people to solve specific tasks. The groups that performed best, overall, were those that had a mix of ideologies, allowing them to both make hierarchical structures to organize their labor and incorporate new members when needed. I don’t remember much else of the article, nor did I read the original study, so I don’t know what exactly the tasks were, or how reliable this study really was, but the basic idea of it is appealing: organize when necessary; form alliances when necessary. A good leader recognizes the skills of different people in their group and uses their authority to direct the best use of these skills.
Our current society greatly lacks in this kind of coherent, organizing direction. Most communities have very little in the way of leadership–moral, spiritual, philosophical, or material–and our society seems constantly intent on attacking and tearing down any kind of hierarchies, even those based on pure skill and competence. Likewise, much of what passes for “leadership” is people demanding that you do what they say, not demonstrating any kind of competence. But when we do find competent leaders, we would do well to let them lead.