I suspect nature is constrained by basic physics/chemistry/thermodynamics in a variety of interesting ways.
For example, chemical reactions (and thus biological processes) proceed more quickly when they are warm than cold–this is pretty much a tautology, since temperature=movement–and thus it seems reasonable to expect certain biological processes to proceed more slowly in colder places/seasons than in warmer ones.
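The usual rule of thumb here is the Q10 temperature coefficient: a process's rate multiplies by some factor (typically 2-3 for biological processes) for every 10 °C increase. A minimal sketch, with the Q10 value itself as an illustrative assumption:

```python
def rate_ratio(t1_c, t2_c, q10=2.0):
    """Q10 rule of thumb: a process's rate multiplies by q10 for every
    10 degree C rise in temperature (q10 of ~2-3 is typical in biology)."""
    return q10 ** ((t2_c - t1_c) / 10)

print(rate_ratio(20, 30))  # -> 2.0: warming by 10 C doubles the rate
print(rate_ratio(37, 27))  # -> 0.5: cooling by 10 C halves it
```

By this logic, an organism running even one degree cooler is running all of its chemistry several percent slower.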
Temperature is a basic and essential property of any physical system, including living systems. Even modest variations in temperature can have profound effects on organisms, and it has long been thought that as metabolism increases at higher temperatures so should rates of ageing. Here, we review the literature on how temperature affects longevity, ageing and life history traits. From poikilotherms to homeotherms, there is a clear trend for lower temperature being associated with longer lifespans both in wild populations and in laboratory conditions. Many life-extending manipulations in rodents, such as caloric restriction, also decrease core body temperature.
This implies, in turn, that people (or animals) who overeat will tend to die younger, not necessarily due to any particular effects of having extra lumps of fat around, but because they burn hotter and thus faster.
Weighing more may trigger certain physiological changes–like menarche–to begin earlier due to the beneficial presence of fat (you don’t want to menstruate if you don’t have at least a little weight to spare), which may in turn speed up certain other parts of aging. But there could also be an additional effect on aging just from the presence of more cells in the body, each requiring additional metabolic processes to maintain.
Observational study of 8,003 American men of Japanese ancestry from the Honolulu Heart Program/Honolulu-Asia Aging Study (HHP/HAAS), a genetically and culturally homogeneous cohort followed for over 40 years. …
A positive association was found between baseline height and all-cause mortality (RR = 1.007; 95% CI 1.003–1.011; P = 0.002) over the follow-up period. Adjustments for possible confounding variables reduced this association only slightly (RR = 1.006; 95% CI 1.002–1.010; P = 0.007). In addition, height was positively associated with all cancer mortality and mortality from cancer unrelated to smoking. A Cox regression model with time-dependent covariates showed that relative risk for baseline height on mortality increased as the population aged. Comparison of genotypes of a longevity-associated single nucleotide polymorphism in FOXO3 showed that the longevity allele was inversely associated with height. This finding was consistent with prior findings in model organisms of aging. Height was also positively associated with fasting blood insulin level, a risk factor for mortality. Regression analysis of fasting insulin level (mIU/L) on height (cm) adjusting for the age both data were collected yielded a regression coefficient of 0.26 (95% CI 0.10–0.42; P = 0.001).
The more of you there is, the more of you there is to age.
But there’s another possibility involving internal temperature–since internal body temperature requires calories to maintain, people who “run hot” (that is, are naturally warmer) may burn more calories and tend to be thinner than people who tend to run cool, who may burn fewer calories and thus tend to weigh more. Eg, low body temperature linked to obesity in new study:
A new study has found that obese people (BMI >30) have lower body temperature during the day than normal weight people. The obese people had an average body temperature that was 0.63 degrees F cooler than normal weight people. The researchers calculated that this lower body temperature—which reflects a lower metabolic rate—would result in a body fat accumulation of approximately 160 grams per month, or four to five pounds a year, enough for the creeping weight gain many people experience.
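The study's arithmetic is easy to sanity-check. Fat stores roughly 9 kcal per gram, so 160 g/month implies an energy gap of about 50 kcal/day; the sketch below takes that daily gap as an assumption and works forward:

```python
KCAL_PER_GRAM_FAT = 9.0   # approximate energy density of body fat
GRAMS_PER_POUND = 453.6

daily_kcal_gap = 50       # assumed metabolic-rate difference implied by the study
grams_per_month = daily_kcal_gap * 30 / KCAL_PER_GRAM_FAT
pounds_per_year = grams_per_month * 12 / GRAMS_PER_POUND

print(round(grams_per_month))      # -> 167 grams/month, close to the study's 160
print(round(pounds_per_year, 1))   # -> 4.4 pounds/year, i.e. "four to five pounds"
```

A mere 50 kcal/day–about half a slice of bread–is all it takes to add up to the study's "creeping weight gain."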
There’s an interesting discussion in the link on thyroid issues that cause people to run cold and thus gain weight, and how some people lose weight with thyroid treatment.
On the other hand, this study found the opposite, and maybe the whole thing just washes out to women and men having different internal temperatures?
Obese people are–according to one study–more likely to suffer mood or mental disorders, which could also be triggered by an underlying health problem. They also suffer faster functional decline in old age:
Women had a higher prevalence of reported functional decline than men at the upper range of BMI categories (31.4% vs 14.3% for BMI > or =40). Women (odds ratio (OR) = 2.61, 95% confidence interval (CI) = 1.39-4.95) and men (OR = 3.32, 95% CI = 1.29-8.46) exhibited increased risk for any functional decline at BMI of 35 or greater. Weight loss of 10 pounds and weight gain of 20 pounds were also risk factors for any functional decline.
Note that gaining weight and losing weight were also related to decline, probably due to health problems that caused the weight fluctuations in the first place.
Of course, general physical decline and mental decline go hand-in-hand. Whether obesity causes declining health, declining health causes obesity, or some underlying third factor, like biological aging, underlies both, I don’t know.
Anyway, I know this thought is a bit disjointed; it’s mostly just food for thought.
A species may live in relative equilibrium with its environment, hardly changing from generation to generation, for millions of years. Turtles, for example, have barely changed since the Cretaceous, when dinosaurs still roamed the Earth.
But if the environment changes–critically, if selective pressures change–then the species will change, too. This was most famously demonstrated with English moths, which changed color from white-and-black speckled to pure black when pollution darkened the trunks of the trees they lived on. To survive, these moths need to avoid being eaten by birds, so any moth that stands out against the tree trunks tends to get turned into an avian snack. Against light-colored trees, dark-colored moths stood out and were eaten. Against dark-colored trees, light-colored moths stood out.
This change did not require millions of years. Dark-colored moths were virtually unknown in 1810, but by 1895, 98% of the moths were black.
The time it takes for evolution to occur depends simply on (A) the frequency of a trait in the population and (B) how strongly you are selecting for (or against) it.
Let’s break this down a little bit. Within a species, there exists a great deal of genetic variation. Some of this variation happens because two parents with different genes get together and produce offspring with a combination of their genes. Some of this variation happens because of random errors–mutations–that occur during copying of the genetic code. Much of the “natural variation” we see today started as some kind of error that proved to be useful, or at least not harmful. For example, all humans originally had dark skin similar to modern Africans’, but random mutations in some of the folks who no longer lived in Africa gave them lighter skin, eventually producing “white” and “Asian” skin tones.
(These random mutations also happen in Africa, but there they are harmful and so don’t stick around.)
Natural selection can only act on the traits that are actually present in the population. If we tried to select for “ability to shoot x-ray lasers from our eyes,” we wouldn’t get very far, because no one actually has that mutation. By contrast, albinism is rare, but it definitely exists, and if for some reason we wanted to select for it, we certainly could. (The incidence of albinism among the Hopi Indians is high enough–1 in 200 Hopis vs. 1 in 20,000 Europeans generally and 1 in 30,000 Southern Europeans–for scientists to discuss whether the Hopi have been actively selecting for albinism. This still isn’t a lot of albinism, but since the American Southwest is not a good environment for pale skin, it’s something.)
You will have a much easier time selecting for traits that crop up more frequently in your population than traits that crop up rarely (or never).
Second, we have intensity–and variety–of selective pressure. What % of your population is getting removed by natural selection each year? If 50% of your moths get eaten by birds because they’re too light, you’ll get a much faster change than if only 10% of moths get eaten.
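A toy model makes the difference concrete. This is a deliberate simplification (one locus, asexual inheritance; the real peppered-moth genetics involve a dominant allele), but it shows how the strength of selection sets the speed of change:

```python
def generations_until(p0, s, target=0.98):
    """Deterministic two-type selection: dark moths have relative fitness 1,
    light moths 1 - s (s = fraction of light moths eaten each generation).
    Returns generations until the dark type's frequency reaches target."""
    p, gens = p0, 0
    while p < target:
        p = p / (p + (1 - p) * (1 - s))  # renormalize frequencies after selection
        gens += 1
    return gens

# Dark moths start at 0.1% of the population.
fast = generations_until(0.001, s=0.50)  # half the light moths eaten each year
slow = generations_until(0.001, s=0.10)  # only 10% eaten each year
print(fast, slow)  # strong selection reaches 98% dark several times sooner
```

With one generation a year, the strong-selection run takes under twenty years while the weak one takes about a century; the real 1810-1895 moth record falls in between.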
Selection doesn’t have to involve getting eaten, though. Perhaps some of your moths are moth Lotharios, seducing all of the moth ladies with their fuzzy antennae. Over time, the moth population will develop fuzzier antennae as these handsome males out-reproduce their less hirsute cousins.
No matter what kind of selection you have, nor what part of your curve it’s working on, all that ultimately matters is how many offspring each individual has. If white moths have more children than black moths, then you end up with more white moths. If black moths have more babies, then you get more black moths.
So what happens when you completely remove selective pressures from a population?
Back in 1968, ethologist John B. Calhoun set up an experiment popularly called “Mouse Utopia.” Four pairs of mice were given a large, comfortable habitat with no predators and plenty of food and water.
Predictably, the mouse population increased rapidly–once the mice were established in their new homes, their population doubled every 55 days. But after 211 days of explosive growth, reproduction began–mysteriously–to slow. For the next 245 days, the mouse population doubled only once every 145 days.
The birth rate continued to decline. As births and death reached parity, the mouse population stopped growing. Finally the last breeding female died, and the whole colony went extinct.
As I’ve mentioned before, Israel is (AFAIK) the only developed country in the world with a TFR above replacement.
It has long been known that overcrowding leads to population stress and reduced reproduction, but overcrowding can only explain why the mouse population began to shrink–not why it died out. Surely by the time there were only a few breeding pairs left, things had become comfortable enough for the remaining mice to resume reproducing. Why did the population not stabilize at some comfortable level?
Professor Bruce Charlton suggests an alternative explanation: the removal of selective pressures on the mouse population resulted in increasing mutational load, until the entire population became too mutated to reproduce.
Unfortunately, randomly changing part of your genetic code is more likely to give you no skin than skintanium armor.
But only the worst genetic problems never see the light of day. Plenty of mutations merely reduce fitness without actually killing you. Down Syndrome, famously, is caused by an extra copy of chromosome 21.
While a few traits–such as sex or eye color–can be simply modeled as influenced by only one or two genes, many traits–such as height or IQ–appear to be influenced by hundreds or thousands of genes:
Differences in human height is 60–80% heritable, according to several twin studies and has been considered polygenic since the Mendelian-biometrician debate a hundred years ago. A genome-wide association (GWA) study of more than 180,000 individuals has identified hundreds of genetic variants in at least 180 loci associated with adult human height. The number of individuals has since been expanded to 253,288 individuals and the number of genetic variants identified is 697 in 423 genetic loci.
Obviously each of these genes plays only a small role in determining overall height (and this is, of course, holding environmental factors constant). There are a few extreme conditions–gigantism and dwarfism–that are caused by single mutations, but the vast majority of height variation is caused by which particular mix of those 700 or so variants you happen to have.
The general figure for the heritability of IQ, according to an authoritative American Psychological Association report, is 0.45 for children, and rises to around 0.75 for late teens and adults. In simpler terms, IQ goes from being weakly correlated with genetics, for children, to being strongly correlated with genetics for late teens and adults. … Recent studies suggest that family and parenting characteristics are not significant contributors to variation in IQ scores; however, poor prenatal environment, malnutrition and disease can have deleterious effects.…
Despite intelligence having substantial heritability2 (0.54) and a confirmed polygenic nature, initial genetic studies were mostly underpowered3, 4, 5. Here we report a meta-analysis for intelligence of 78,308 individuals. We identify 336 associated SNPs (METAL P < 5 × 10−8) in 18 genomic loci, of which 15 are new. Around half of the SNPs are located inside a gene, implicating 22 genes, of which 11 are new findings. Gene-based analyses identified an additional 30 genes (MAGMA P < 2.73 × 10−6), of which all but one had not been implicated previously. We show that the identified genes are predominantly expressed in brain tissue, and pathway analysis indicates the involvement of genes regulating cell development (MAGMA competitive P = 3.5 × 10−6). Despite the well-known difference in twin-based heritability2 for intelligence in childhood (0.45) and adulthood (0.80), we show substantial genetic correlation (rg = 0.89, LD score regression P = 5.4 × 10−29). These findings provide new insight into the genetic architecture of intelligence.
The greater the number of genes influencing a trait, the harder they are to identify without extremely large studies, because any small group of people might not even share the same set of relevant genes.
High IQ correlates positively with a number of life outcomes, like health and longevity, while low IQ correlates with negative outcomes like disease, mental illness, and early death. Obviously this is in part because dumb people are more likely to make dumb choices which lead to death or disease, but IQ also correlates with choice-free matters like height and your ability to quickly press a button. Our brains are not some mysterious entities floating in a void, but physical parts of our bodies, and anything that affects our overall health and physical functioning is likely to also have an effect on our brains.
The study focused, for the first time, on rare, functional SNPs – rare because previous research had only considered common SNPs and functional because these are SNPs that are likely to cause differences in the creation of proteins.
The researchers did not find any individual protein-altering SNPs that met strict criteria for differences between the high-intelligence group and the control group. However, for SNPs that showed some difference between the groups, the rare allele was less frequently observed in the high intelligence group. This observation is consistent with research indicating that rare functional alleles are more often detrimental than beneficial to intelligence.
Greg Cochran has some interesting Thoughts on Genetic Load. (Currently, the most interesting candidate genes for potentially increasing IQ also have terrible side effects, like autism, Tay Sachs and Torsion Dystonia. The idea is that–perhaps–if you have only a few genes related to the condition, you get an IQ boost, but if you have too many, you get screwed.) Of course, even conventional high-IQ has a cost: increased maternal mortality (larger heads).
the difference between the fitness of an average genotype in a population and the fitness of some reference genotype, which may be either the best present in a population, or may be the theoretically optimal genotype. … Deleterious mutation load is the main contributing factor to genetic load overall. Most mutations are deleterious, and occur at a high rate.
There’s math, if you want it.
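The core of that math is mutation-selection balance: a deleterious allele arising by mutation at rate mu per generation, whose carriers lose a fraction s of their fitness, settles at an equilibrium frequency near mu/s (for an allele expressed in a single copy). A numeric sketch, with illustrative values of mu and s, not figures from any particular study:

```python
def equilibrium_frequency(mu, s, generations=10_000):
    """Iterate mutation-selection balance for a deleterious allele:
    each generation, selection removes a fraction s of carriers, then
    mutation converts normal alleles to the deleterious form at rate mu."""
    q = 0.0
    for _ in range(generations):
        q = q * (1 - s)       # selection against carriers
        q = q + mu * (1 - q)  # fresh mutations trickle in
    return q

mu, s = 1e-5, 0.01  # illustrative: rare mutation, mild 1% fitness cost
print(equilibrium_frequency(mu, s))  # converges to roughly mu / s = 0.001
```

Note what happens as s shrinks: the milder the fitness cost, the higher the equilibrium frequency, which is exactly why "only a little bit deleterious" mutations hang around for so many generations.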
Normally, genetic mutations are removed from the population at a rate determined by how bad they are. Really bad mutations kill you instantly, and so are never born. Slightly less bad mutations might survive, but never reproduce. Mutations that are only a little bit deleterious might have no obvious effect, but result in having slightly fewer children than your neighbors. Over many generations, this mutation will eventually disappear.
(Some mutations are more complicated–sickle cell, for example, is protective against malaria if you have only one copy of the mutation, but gives you sickle cell anemia if you have two.)
Throughout history, infant mortality was our single biggest killer. For example, here is some data from Jakubany, a town in the Carpathian Mountains:
We can see that, prior to the 1900s, the town’s infant mortality rate stayed consistently above 20%, and often peaked near 80%.
When I first ran a calculation of the infant mortality rate, I could not believe certain of the intermediate results. I recompiled all of the data and recalculated … with the same astounding result – 50.4% of the children born in Jakubany between the years 1772 and 1890 would die before reaching ten years of age! …one out of every two! Further, over the same 118 year period, of the 13306 children who were born, 2958 died (~22%) before reaching the age of one.
Historical infant mortality rates can be difficult to calculate, in part because they were so high that people didn’t always bother to record infant deaths. And since infants are small and their bones delicate, their burials are not as easy to find as adults’. Nevertheless, Wikipedia estimates that Paleolithic man had an average life expectancy of 33 years:
Based on the data from recent hunter-gatherer populations, it is estimated that at 15, life expectancy was an additional 39 years (total 54), with a 0.60 probability of reaching 15.
In other words, a 40% chance of dying in childhood. (Not exactly the same as infant mortality, but close.)
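Those two numbers are nearly enough to reconstruct the 33-year figure. Life expectancy at birth averages over everyone, including the children who died; the average age of childhood death below is my assumption (most such deaths were in infancy), not a figure from the source:

```python
p_reach_15 = 0.60
ex_at_15 = 15 + 39               # survivors average 54 years of life in total
avg_childhood_death_age = 2      # assumed: childhood deaths cluster in infancy

# Life expectancy at birth = weighted average over survivors and non-survivors
e0 = p_reach_15 * ex_at_15 + (1 - p_reach_15) * avg_childhood_death_age
print(round(e0, 1))  # lands right around the ~33-year estimate
```

The "average life expectancy of 33" thus doesn't mean adults dropped dead at 33; it means mass death in childhood dragged the average down.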
Wikipedia gives similarly dismal stats for life expectancy in the Neolithic (20-33), Bronze and Iron ages (26), Classical Greece (28 or 25), Classical Rome (20-30), Pre-Columbian Southwest US (25-30), Medieval Islamic Caliphate (35), Late Medieval English Peerage (30), early modern England (33-40), and the whole world in 1900 (31).
Over at ThoughtCo: Surviving Infancy in the Middle Ages, the author reports estimates for between 30 and 50% infant mortality rates. I recall a study on Anasazi nutrition which I sadly can’t locate right now, which found 100% malnutrition rates among adults (based on enamel hypoplasias,) and 50% infant mortality.
As Priceonomics notes, the main driver of increasing global life expectancy–48 years in 1950 and 71.5 years in 2014 (according to Wikipedia)–has been a massive decrease in infant mortality. The average life expectancy of an American newborn back in 1900 was only 47 and a half years, whereas a 60 year old could expect to live to be 75. In 1998, the average infant could expect to live to about 75, and the average 60 year old could expect to live to about 80.
Michael A Woodley suggests that what was going on [in the Mouse experiment] was much more likely to be mutation accumulation; with deleterious (but non-fatal) genes incrementally accumulating with each generation and generating a wide range of increasingly maladaptive behavioural pathologies; this process rapidly overwhelming and destroying the population before any beneficial mutations could emerge to ‘save’ the colony from extinction. …
The reason why mouse utopia might produce so rapid and extreme a mutation accumulation is that wild mice naturally suffer very high mortality rates from predation. …
Thus mutation selection balance is in operation among wild mice, with very high mortality rates continually weeding-out the high rate of spontaneously-occurring new mutations (especially among males) – with typically only a small and relatively mutation-free proportion of the (large numbers of) offspring surviving to reproduce; and a minority of the most active and healthy (mutation free) males siring the bulk of each generation.
However, in Mouse Utopia, there is no predation and all the other causes of mortality (e.g. starvation, violence from other mice) are reduced to a minimum – so the frequent mutations just accumulate, generation upon generation – randomly producing all sorts of pathological (maladaptive) behaviours.
Today, almost everyone in the developed world has plenty of food, a comfortable home, and doesn’t have to worry about dying of bubonic plague. We live in humantopia, where the biggest factor influencing how many kids you have is how many you want to have.
Back in 1930, infant mortality rates were highest among the children of unskilled manual laborers, and lowest among the children of professionals (IIRC, this is British data). Today, infant mortality is almost non-existent, but voluntary childlessness has now inverted this phenomenon:
Yes, the percent of childless women appears to have declined since 1994, but the overall pattern of who is having children still holds. Further, while only 8% of women with post-graduate degrees have 4 or more children, 26% of those who never graduated from high school have 4+ kids. Meanwhile, the age of first-time moms has continued to climb.
Take a moment to consider the high-infant mortality situation: an average couple has a dozen children. Four of them, by random good luck, inherit a good combination of the couple’s genes and turn out healthy and smart. Four, by random bad luck, get a less lucky combination of genes and turn out not particularly healthy or smart. And four, by very bad luck, get some unpleasant mutations that render them quite unhealthy and rather dull.
Infant mortality claims half their children, taking the least healthy. They are left with 4 bright children and 2 moderately intelligent children. The three brightest children succeed at life, marry well, and end up with several healthy, surviving children of their own, while the moderately intelligent do okay and end up with a couple of children.
On average, society’s overall health and IQ should hold steady or even increase over time, depending on how strong the selective pressures actually are.
Or consider a consanguineous couple with a high risk of genetic birth defects: perhaps a full 80% of their children die, but 20% turn out healthy and survive.
Today, by contrast, your average couple has two children. One of them is lucky, healthy, and smart. The other is unlucky, unhealthy, and dumb. Both survive. The lucky kid goes to college, majors in underwater intersectionist basket-weaving, and has one kid at age 40. That kid has Down Syndrome and never reproduces. The unlucky kid can’t keep a job, has chronic health problems, and 3 children by three different partners.
Your consanguineous couple migrates from war-torn Somalia to Minnesota. They still have 12 kids, but three of them are autistic with IQs below the official retardation threshold. “We never had this back in Somalia,” they cry. “We don’t even have a word for it.”
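The thought experiment above can be run as a toy simulation. Everything here is an illustrative assumption (the population size, the mutation effect sizes, the clean 50% cull): each generation adds new mildly deleterious mutations, and we compare a regime where the most-loaded half of children die before reproducing against one where everyone survives.

```python
import random

def mean_load_after(generations, mortality, pop_size=500, seed=0):
    """Toy model of genetic load. Each child inherits a random parent's
    load plus new mutations (mean +2 units per generation, with noise).
    Under mortality, the most-loaded fraction of children is removed."""
    rng = random.Random(seed)
    loads = [0.0] * pop_size
    for _ in range(generations):
        children = [rng.choice(loads) + rng.gauss(2.0, 1.0)
                    for _ in range(pop_size * 2)]
        children.sort()  # healthiest (lowest-load) children first
        survivors = children[: int(len(children) * (1 - mortality))]
        loads = rng.sample(survivors, pop_size)  # next generation's parents
    return sum(loads) / len(loads)

utopia = mean_load_after(20, mortality=0.0)  # no selection at all
harsh = mean_load_after(20, mortality=0.5)   # half of children die, worst first
print(utopia, harsh)  # load accumulates far faster in the utopia
```

Even the harsh regime's load creeps upward in this sketch, because selection only trims the worst tail while new mutations arrive every generation; the utopia regime simply keeps everything.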
People normally think of dysgenics as merely “the dumb outbreed the smart,” but genetic load applies to everyone–men and women, smart and dull, black and white, young and especially old–because we all make random copying errors when replicating our DNA.
I could offer a list of signs of increasing genetic load, but there’s no way to avoid cherry-picking trends I already know are happening, like falling sperm counts or rising (diagnosed) autism rates, so I’ll skip that. You may substitute your own list of “obvious signs society is falling apart at the genes” if you so desire.
Nevertheless, the transition from 30% (or greater) infant mortality to almost 0% is amazing, both on a technical level and because it heralds an unprecedented era in human evolution. The selective pressures on today’s people are massively different from those our ancestors faced, simply because our ancestors’ biggest filter was infant mortality. Unless infant mortality acted completely at random–taking the genetically loaded and unloaded alike–or on factors completely irrelevant to load, the elimination of infant mortality must continuously increase the genetic load in the human population. Over time, if that load is not selected out–say, through more people being too unhealthy to reproduce–then we will end up with an increasing population of physically sick, maladjusted, mentally ill, and low-IQ people.
If all of the above is correct, then I see only 4 ways out:
1. Do nothing: genetic load increases until the population is non-functional and collapses, resulting in a return of Malthusian conditions, invasion by stronger neighbors, or extinction.
2. Sterilization or other weeding out of high-load people, coupled with higher fertility among low-load people.
3. Abortion of high-load fetuses.
4. Genetic engineering to repair deleterious mutations directly.
#1 sounds unpleasant, and #2 would result in masses of unhappy people. We don’t have the technology for #4, yet. I don’t think the technology is quite there for #2, either, but it’s much closer–we can certainly test for many of the deleterious mutations that we do know of.
A few years ago I went through a nutrition kick and read about a dozen books about food. Today I came across a graph that perfectly represents what I learned:
Basically, everything will kill you.
There are three major schools of thought on what’s wrong with modern diets: 1. fats, 2. carbs (sugars,) or 3. proteins.
Unfortunately, all food is composed of fats+carbs+proteins.
Ultimately, the best advice I came across was just to stop stressing out. We don’t really know the best foods to eat, and a lot of official health advice that people have tried to follow actually turned out to be quite bad, but we have a decent intuition that you shouldn’t eat cupcakes for lunch.
Dieting doesn’t really do much for the vast majority of people, but it’s a huge industry that sucks up a ton of time and money. How much you weigh has a lot more to do with factors outside of your control, like genetics or whether there’s a famine going on in your area right now.
You’re probably not going to do yourself any favors stressing out about food or eating a bunch of things you don’t like.
Remember the 20/80 rule: 80% of the effect comes from 20% of the effort, and vice versa. Eating reasonable quantities of good food and avoiding junk will do far more good than substituting chicken breast for chicken thighs in everything you cook.
There is definitely an ethnic component to diet–eg, people whose ancestors historically ate grain are better adapted to it than people who didn’t. So if you’re eating a whole bunch of stuff your ancestors didn’t and you don’t feel so good, that may be the problem.
Personally, I am wary of refined sugars in my foods, but I am very sensitive to sugars. (I don’t even drink juice.) But this may just be me. Pay attention to your body and how you feel after eating different kinds of food, and eat what makes you feel good.
It’s been a rough day. So I’m going to complain about something totally mundane: salads.
I was recently privy to a conversation between two older women on why it is so hard to stay thin in the South: lack of good salads. Apparently when you go to a southern restaurant, they serve a big piece of meat (often deep-fried steak), a lump of mashed potatoes and gravy, and a finger-bowl with 5 pieces of iceberg lettuce, an orange tomato, and a slathering of dressing.
Sounds good to me.
Now, if you like salads, that’s fine. You’re still welcome here. Personally, I just don’t see the point. The darn things don’t have any calories!
From an evolutionary perspective, obviously food provides two things: calories and nutrients. There may be some foods that are mostly calories but few nutrients (eg, honey) and some that are all nutrients but no calories (salt isn’t exactly a food, but it otherwise fits the bill).
Food doesn’t seem like it should be that complicated–surely we’ve evolved to eat effectively by now. So any difficulties we have (besides just getting the food) are likely us over-thinking the matter. There’s no problem getting people to eat high-calorie foods, because they taste good. It’s also not hard to get people to eat salt–it also tastes good.
But people seem to have this ambivalent relationship with salads. What’s so important about eating a bunch of leaves with no calories and a vaguely unpleasant flavor? Can’t I just eat a nice potato? Or some corn? Or asparagus?
Don’t get me wrong. I don’t hate vegetables. Just everything that goes in a salad. Heck, I’ll even eat most salad fixins if they’re cooked. I won’t turn down fried green tomatoes, you know.
While there’s nothing wrong with enjoying a bowl of lettuce if that’s your thing, I think our society has gone down a fundamentally wrong collective path when it comes to nutrition wisdom. The idea is that your hunger drive is an insatiable beast that will force you to consume as much food as possible, making you overweight and giving you a heart attack, and so the only way to save yourself is to trick the beast by filling your stomach with fluffy, zero-calorie plants until there isn’t any more room.
This seems to me like the direct opposite of what you should be doing. See, I assume your body isn’t an idiot, and can figure out whether you’ve just eaten something full of calories, and so should go sleep for a bit, or if you just ate some leaves and should keep looking for food.
I recently tried increasing the amount of butter I eat each day, and the result was I felt extremely full and didn’t want to eat dinner. Butter is a great way to almost arbitrarily increase the number of calories per volume of food.
If you’re wondering about my weight, well, let’s just say that despite the butter, never going on a diet, and abhorring salads, I’m still not overweight–but this is largely genetic. (I should note though that I don’t eat many sweets at all.)
Obviously I am not a nutritionist, a dietician, nor a doctor. I’m not a good source for health advice. But it seems to me that increasing or decreasing the number of sweets you eat per day probably has a bigger impact on your overall weight than adding or subtracting a salad.
By the way, guys, I have not been able to write as much as I would like to, lately, so I am dropping the Wed. post and only going to be updating 4 times a week. Hopefully I’ll get more time soon. :)
It is very easy to dismiss Appalachia’s problems by waving a hand and saying, “West Virginia has an average IQ of 98.”
But there are a hell of a lot of states that have average IQs lower than West Virginia, but are still doing better. For that matter, France has a lower average IQ, and France is still doing pretty well for itself.
So we’re going to discuss some alternative theories.
(And my apologies to WV for using it as a stand-in for the entirety of Greater Appalachia, which, as discussed a few days ago, includes parts of a great number of states, from southern Pennsylvania to eastern Texas. Unfortunately for me, only WV, Kentucky, and Tennessee fall entirely within Greater Appalachia, and since it is much easier to find data aggregated by state than by county or “cultural region,” I’ve been dependent on these states for much of my research.)
At any rate, it’s no secret that Appalachia is not doing all that well:
The Death of Manufacturing
Having your local industries decimated by foreign competition and workforces laid off due to automation does bad things to your economy. These things look great on paper, where increasing efficiency and specialization result in higher profits for factory owners, but tend to work out very badly for the folks who have lost their jobs.
Indeed, the US has barely even begun thinking about how we plan on dealing with the effects of continued automation. Do 90% of people simply become irrelevant as robots take over their jobs? Neither “welfare for everyone” nor “everybody starves” seem like viable solutions. So far, most politicians have defaulted to platitudes about how “more education” will be the solution to all our woes, but how you turn a 45-year old low-IQ meat packer who just got replaced by a robot into a functional member of the “information economy” remains to be seen.
Of course, economic downturns happen; fads come and go; industries go in and out. The Rust Belt, according to Wikipedia, runs north of Greater Appalachia, through Pennsylvania, New York, northern Ohio, Detroit, etc. These areas have been struggling for decades, but many of them, like Pittsburgh, are starting to recover. Appalachia, by contrast, is still struggling.
This may just be a side effect of Appalachia being more rural; Pittsburgh is a large city with millions of people employed in a variety of industries. If one goes out, others can, hopefully, replace it. But in a rural area with only one or two large employers–sometimes literal “company towns” built near mines–if the main industry goes out, you may not get anything coming back in.
Appalachia has geography that makes it difficult to transport goods in and out as cheaply as you can transport them elsewhere, but then, so does Switzerland, and Switzerland seems to be doing pretty well. (Of course, Switzerland seems to have specialized in small, expensive, easy-to-transport luxury goods like watches, chocolate, and bank deposits, while Appalachia has specialized in cheap, heavy, unpleasant-to-produce goods like coal.)
But I am being over-generous: America killed its manufacturing.
We killed it because our upper classes look down their noses at manufacturing; such jobs are unpleasant and low-class, and therefore they cannot understand that for some people, these jobs are the only thing standing between them and poverty. Despite the occasional protest against outsourcing, our government–Republicans and Democrats–has forged ahead with its free-trade, send-everything-to-China-and-fire-the-Americans, import-Mexicans-and-fire-the-Americans, and then reap-the-profits agenda.
Too Much Regulation
Over-regulation begins with the best of intentions, then breaks your industries. Nobody wants to die in a fire or a cave-in, but you can’t regulate away all risk and still get anything done.
Every regulation, every record-keeping requirement, every mandated compliance, is a tax on efficiency–and thus on profits. Some regulation, of course, probably increases profits–for example, I am more likely to buy a medicine if I have some guarantee that it isn’t made with rat poison. But beyond that guarantee, increasing requirements that companies test all of their products for toxins imposes more costs than the companies recoup–at which point, companies tend to leave for more profitable climes.
Likewise, while health insurance sounds great, running it through employers is madness. Companies should devote their efforts to making products (or services,) not hiring expensive lawyers and accountants to work through the intricacies of health care law compliance and income withholding.
The few manufacturers left in Appalachia (and probably elsewhere in the country) have adopted a creative policy to avoid paying health insurance costs for their workers: fire everyone just before they qualify for insurance. By hiring only temp workers, outsourcing everything, and only letting employees bill 20 hours a week, manufacturers avoid complying with employee-related regulations.
Oh, sure, you might think you could just get two 20-hour a week jobs, but that requires being able to schedule two different jobs. When you have no idea whether you are going to be working every day or not until you show up for work at 7 AM, and you’ll get fired if you don’t show up, getting a second job simply isn’t an option.
I have been talking about over-regulation for over a decade, but it is the sort of issue that it is difficult to get people worked up over, much less make them understand if they haven’t lived it. Democrats just look aghast that anyone would suggest that more regulations won’t lead automatically to more goodness, and Republicans favor whichever policies lead to higher profits, without any concern for the needs of workers.
New York Times columnist Thomas L. Friedman recently encapsulated this view in a piece called “Start-Ups, Not Bailouts.” His argument: Let tired old companies that do commodity manufacturing die if they have to. If Washington really wants to create jobs, he wrote, it should back startups.
Friedman is wrong. Startups are a wonderful thing, but they cannot by themselves increase tech employment. Equally important is what comes after that mythical moment of creation in the garage, as technology goes from prototype to mass production. This is the phase where companies scale up. They work out design details, figure out how to make things affordably, build factories, and hire people by the thousands. Scaling is hard work but necessary to make innovation matter.
The scaling process is no longer happening in the U.S. And as long as that’s the case, plowing capital into young companies that build their factories elsewhere will continue to yield a bad return in terms of American jobs. …
As time passed, wages and health-care costs rose in the U.S. China opened up. American companies discovered that they could have their manufacturing and even their engineering done more cheaply overseas. When they did so, margins improved. Management was happy, and so were stockholders. Growth continued, even more profitably. But the job machine began sputtering.
The 10X Factor
Today, manufacturing employment in the U.S. computer industry is about 166,000, lower than it was before the first PC, the MITS Altair 8800, was assembled in 1975 (figure-B). Meanwhile, a very effective computer manufacturing industry has emerged in Asia, employing about 1.5 million workers—factory employees, engineers, and managers. The largest of these companies is Hon Hai Precision Industry, also known as Foxconn. The company has grown at an astounding rate, first in Taiwan and later in China. Its revenues last year were $62 billion, larger than Apple (AAPL), Microsoft (MSFT), Dell (DELL), or Intel. Foxconn employs over 800,000 people, more than the combined worldwide head count of Apple, Dell, Microsoft, Hewlett-Packard (HPQ), Intel, and Sony (SNE) (figure-C).
Companies don’t scale up in the US because dealing with the regulations is monstrous. Anyone who has worked in industry can tell you this; heck, even Kim Levine, author of Millionaire Mommy (don’t laugh at the title, it’s actually a pretty good book,) touches on the subject. Levine notes that early in the process of scaling up the manufacture of her microwavable pillows, she had dreams of owning her own little factory, but once she learned about all of the regulations she would have to comply with, she decided that would be a horrible nightmare.
I don’t have time to go into more detail on the subject, but here is a related post from Slate Star Codex:
I started the book with the question: what exactly do real estate developers do? …
As best I can tell, the developer’s job is coordination. This often means blatant lies. The usual process goes like this: the bank would be happy to lend you the money as long as you have guaranteed renters. The renters would be happy to sign up as long as you show them a design. The architect would be happy to design the building as long as you tell them what the government’s allowing. The government would be happy to give you your permit as long as you have a construction company lined up. And the construction company would be happy to sign on with you as long as you have the money from the bank in your pocket. Or some kind of complicated multi-step catch-22 like that. The solution – or at least Trump’s solution – is to tell everybody that all the other players have agreed and the deal is completely done except for their signature. The trick is to lie to the right people in the right order, so that by the time somebody checks to see whether they’ve been conned, you actually do have the signatures you told them that you had. The whole thing sounds very stressful.
The developer’s other job is dealing with regulations. The way Trump tells it, there are so many regulations on development in New York City in particular and America in general that erecting anything larger than a folding chair requires the full resources of a multibillion dollar company and half the law firms in Manhattan. Once the government grants approval it’s likely to add on new conditions when you’re halfway done building the skyscraper, insist on bizarre provisions that gain it nothing but completely ruin your chance of making a profit, or just stonewall you for the heck of it if you didn’t donate to the right people’s campaigns last year. Reading about the system makes me both grateful and astonished that any structures have ever been erected in the United States at all, and somewhat worried that if anything ever happens to Donald Trump and a few of his close friends, the country will lose the ability to legally construct artificial shelter and we will all have to go back to living in caves.
The current socio-economic system is designed by rootless, soulless, high-IQ, low-time-preference, money-/status-grubbing homo economicus for the benefit of those same homo economicus. It is a system designed for intelligent sociopaths. Those who are rootless, with high IQ and low time preference, can succeed rather well in this system, but it destroys those who need rootedness, or those who are low-IQ or high-time-preference.
Kevin says, “Nothing happened to them. There wasn’t some awful disaster.” But he’s wrong: there was a disaster–and not just one, but multiple related disasters, all occurring simultaneously. …
Every support the white working class (and for that matter the black working class) had vanished within less than a generation. There was a concerted effort to destroy these supports, and this effort succeeded. Through minimal fault of their own the white working class was left with nothing holding them up.
Personally, I lack good first-hand insight into working class cultural matters; I have no idea how much Hollywood mores have penetrated and changed people’s practical lives in rural Arkansas. I must defer, there, to people more knowledgeable than myself.
While death rates have been falling for the rest of the developed world and for America’s blacks and Hispanics, death rates have been rising over the past couple of decades for American whites–middle aged and younger white women, to be exact. They’re up pretty much everywhere, but Appalachia has been the hardest hit.
The first thing everyone seems to cite in response is meth. And indeed, it appears that there is a lot of meth in Appalachia (and a lot of other places):
But I don’t think this explains why death rates are headed up among women. Maybe I’m wrong, (I know rather little about drug use patterns,) but it doesn’t seem like women would be more likely to OD on meth than men. If anything, I get the impression that illegal drugs that fuck you up and kill you are more of a guy thing than a gal thing. Men are probably far more likely to die of alcohol-related causes like drunk driving and cirrhosis of the liver than women, for example, and you don’t even have to deal with criminals to get alcohol.
So, while I agree that drugs appear to be a rising problem, I don’t think they are the problem. (And even then, drug overdoses only raise the deeper question of why more people are using drugs.)
As I mentioned a few posts ago, SpottedToad ran the death rate data by county and came up with three significant correlations: poverty, obesity, and disability. (I don’t know if he looked at meth/drug use by county.)
I, for one, am not surprised to find out that disabled, overweight people are not in the best of health.
Here are SpottedToad’s graphs, showing the correlations he found–I recommend reading his entire post.
Obviously one possibility is that unemployed people feel stressed, binge on cheap crap, get sick, get SSDI, and then die.
But then why are death rates only going up for white women? Plenty of white men are unemployed; plenty of black men and women are poor, fat, and disabled.
Obviously there are a ton of possible confounders–perhaps poor people just happen to make bad life decisions that both make them poor and result in bad health, like smoking cigarettes. Perhaps poor people have worse access to health care, or perhaps being really sick makes people poor. Or maybe the high death rates just happen to be concentrated among people who happen to be fat for purely biological reasons–it appears that the British are among the fattest peoples in Europe, and the Scottish are fatter than the British average. (Before anyone gets their hackles up, I should also note that white Americans are slightly fatter than Scots.)
And as many people have noted, SSI/SSDI are welfare for people who wouldn’t otherwise qualify.
In my correspondence with an observing teacher in the hill country of western Pennsylvania, she reported that in her school a condition was frequent in the families, namely, that the children could not carry prescribed textbook work because of low mentality. This is often spoken of, though incorrectly, as delayed mentality. In one family of eight children only the first child was normal. The mental and physical injuries were increasingly severe. The eighth child had both hare-lip and a double cleft palate. The seventh child had cleft palate and the sixth was a near idiot. The second to fifth, inclusive, presented increasing degrees of disturbed mentality.
In my cabin-to-cabin studies of families living in the hill country of North Carolina, I found many cases of physical and mental injury. Among these cases arthritis and heart disease were very frequent, many individuals being bed ridden. A typical case is shown in the upper part of figure 148 [sorry, I can’t show you the picture, but it is not too important,] of a father and mother and their one child. The child is so badly injured that he is mentally an imbecile. They are living on very poor land where even the vegetable growth is scant and of poor quality. Their food consisted largely of corn bread, corn syrup, some fat pork, and strong coffee.
As the title of the book implies, Dr. Price’s thesis is that bad nutrition leads to physical degeneration. (Which, of course, it does.) He was working back when folks were just discovering vitamins and figuring out that diseases like scurvy, pellagra, and beriberi were really nutritional deficiencies; figuring out the chemical compositions necessary for fertile soil; and before the widespread adoption of artificial fertilizers (possibly before their invention.) Dr. Price thought that American soils, particularly in areas that had been farmed for longer or had warmer, wetter weather, had lost much of their nutritional content:
My studies of this problem of reduced capacity of soils for maintaining animal life have included correspondence with the agricultural departments of all of the states of the union with regard to maintaining cattle. The reduction in capacity ranges from 20 to 90 per cent… I am advised that it would cost $50 an acre to replace the phosphorus alone that has been shipped off the land in large areas.
There is an important fact that we should consider; namely, the role that has been played by glaciers in grinding up and distributing rock formations. One glacier, the movement of which affected the surface soil of Ohio, covered only about half the state; namely, that area west of a line starting east of Cleveland and extending diagonally west across the state to Cincinnati. It is important for us to note that, in the areas extending south and east of this line, several expressions of degeneration are higher than in the areas north and west of this line. The infant mortality per thousand live births in 1939 is informative. In the counties north and west of that line, the death rate was from 40 to 49 per thousand live births; whereas, in the area south and east of that line, the death rate was from 50 to 87.
It is of particular interest to us as dentists that studies show the percentage of teeth with caries to be much higher southeast of this line than northwest of it.
So I Googled around, and found this map of the last glaciation of Ohio:
Okay, I lied, it’s obviously a map of ACT scores. But it actually does look a lot like the glaciation map.
Australia’s soils, from what I understand, are particularly bad–because the continent’s rocks are so geologically old, the soil is extremely low in certain key nutrients, like iodine. Even with iodine supplementation, deficiencies are still occasionally a problem.
Many of the soils in the state are steeply sloping and tend to be shallow, acidic, and deficient in available phosphorus. As early as the late 19th century progressive farmers used rock phosphate, bone meal, and lime to increase crop yield and quality. Since the mid-20th century farmers have used soil tests and corrected mineral deficiencies. Most crop land and much of the pasture land are no longer severely deficient in essential nutrients. West Virginia has always been primarily a livestock producing state. Land on steep slopes is best suited to producing pasture and hay.
Nutritional deficiencies due to poor soil could have been a problem a century ago, just as Pellagra and hookworms were a problem, but they seem unlikely to be a big deal today, given both modern fertilizers and our habit of buying foods shipped in from California.
Looking at statistics from 2005 (the latest for which mortality rates are available) the researchers found that though coal mining brought in about $8 billion to the state coffers of Appalachian states, the costs of the shorter life-spans associated with coal mining operations were nearly $17 billion to $84.5 billion.
Coal mining areas in Appalachia were found to have nearly 11,000 more deaths each year than other places in the nation, with 2,300 of those attributable to environmental factors such as air and water pollution.
The Nation reports that:
In 2010, an explosion at the Upper Big Branch coal mine in southern West Virginia killed twenty-nine miners. Later that year, an explosion at a West Virginia chemical plant killed two workers and released toxic fumes into the surrounding areas. This past year, West Virginia led the nation in coal-mining deaths. …
One study found that residents of areas surrounding mountaintop-removal coal mines “had significantly higher mortality rates, total poverty rates and child poverty rates every year compared to other…counties.” Another study found that compared to residents of other areas in the state, residents of the state’s coal-mining regions were 70 percent more likely to suffer from kidney disease, over 60 percent more likely to develop obstructive lung diseases such as emphysema and 30 percent likelier to have high blood pressure.
In 2014, the Elk River Chemical spill left 300,000 residents of West Virginia without potable water. Five months later, another spill happened at the same site, the fourth in five years. (The chemicals involved are used in the processing/washing of coal.)
Overloaded coal trucks are a perpetual menace on the narrow, winding roads of the Appalachian coalfields. From 2000 to 2004, there were more than seven hundred accidents involving coal trucks in Kentucky alone; fifty-three people died, and more than five hundred were injured. …
After the coal is washed, a slurry of impurities, coal dust, and chemical agents used in the process remains. This liquid waste, called “coal sludge” or “slurry,” is often injected into abandoned underground mines, a practice that can lead to groundwater contamination. … In public hearings, many coalfield residents have attributed their health problems to water wells polluted after the coal mining industry “disposes” its liquid waste by injecting coal slurry underground. The primary disposal practice for coal slurry is to store it in vast unlined lagoons or surface impoundments created near mountaintop-removal mines. Hundreds of these slurry impoundments are scattered across the Appalachian coalfields. Individual impoundments have been permitted to store billions of gallons of waste. … In 2000 a slurry impoundment operated by the Martin County Coal Company in Kentucky broke through into abandoned mineworks, out of old mine portals, and into tributary streams of the Big Sandy River. More than 300 million gallons of coal slurry fouled the waterway for a hundred miles downriver.
So, living near a coal mine is probably bad for your health.
Concentration of Land
Wikipedia claims that land in Appalachia (or maybe it was just WV) is highly concentrated in just a few hands–one of those being the government, as much of Appalachia is national parks and forests and the like. Of course, this could be an effect rather than a cause of poverty.
Appalachia may be “isolated” and “rural,” but it’s quite close to a great many cities and universities. Parts of West Virginia are close enough to DC that people apparently commute between them.
In the early 1900s, so many people left Appalachia for the industrial cities of the Northeast and Midwest that U.S. Route 23 and Interstate 75 became known as the “Hillbilly Highway.” (BTW, I don’t think Appalachians like being called “hillbillies.”)
Compared to Appalachian areas in Arkansas or Oklahoma, West Virginia and Kentucky are particularly close to the industrial regions and coastal universities. As a result, they may have lost a far larger number of their brightest and most determined citizens.
While the Appalachian states don’t have particularly low IQs, their IQ curve seems likely to be narrower than other states’. West Virginia, for example, is only about 3% black, 1% Hispanic, and 0.5% Asian. MA, by contrast, is 9% black, 11% Hispanic, and 6% Asian. Blacks and Hispanics tend to score lower than average on IQ tests, and Asians tend to score higher, potentially giving MA more people scoring both above and below average, while WV may have more people scoring right around the middle of the distribution. With its brightest folks heading to universities outside the region, Appalachia may continue to struggle.
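The narrower-curve point can be made concrete with a little arithmetic on mixture distributions: a mixture of subpopulations with different means always has more variance than its average within-group variance. Here is a minimal sketch, using the demographic shares quoted above but entirely made-up placeholder means and standard deviations (illustrative numbers only, not real psychometric data):

```python
import math

def mixture_sd(groups):
    """Standard deviation of a mixture of subpopulations.

    groups: list of (weight, mean, sd) tuples; weights should sum to 1.
    Mixture variance = sum over groups of w * (sd**2 + (mean - grand_mean)**2),
    i.e. within-group variance plus between-group spread.
    """
    grand_mean = sum(w * m for w, m, _ in groups)
    var = sum(w * (s**2 + (m - grand_mean)**2) for w, m, s in groups)
    return math.sqrt(var)

# Hypothetical illustration only: weights roughly track the demographic
# shares quoted in the text (WV ~3% black, 1% Hispanic, 0.5% Asian;
# MA ~9%, 11%, 6%); group means/SDs are placeholders, not real data.
wv = [(0.955, 100, 15), (0.030, 90, 15), (0.010, 92, 15), (0.005, 105, 15)]
ma = [(0.740, 100, 15), (0.090, 90, 15), (0.110, 92, 15), (0.060, 105, 15)]

print(round(mixture_sd(wv), 2))  # narrower overall curve
print(round(mixture_sd(ma), 2))  # wider overall curve
```

Whatever placeholder numbers you plug in, the more demographically mixed population comes out with the wider overall distribution, which is the mechanical point being made here.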
And finally, yes, maybe there is just something about the kinds of people who live in Appalachia that predispose them to certain ailments, like smoking or over-eating. Perhaps the kinds of people who end up working in coal mines are also the kinds of people who are predisposed to get cancer or use drugs. I don’t know enough people from the area to know either way.
To the people of Appalachia: I wish you health and happiness.
I remember when I first heard about epigenetics–the concept sounded awesome.
Now I cringe at the word.
To over simplify, “epigenetics” refers to biological processes that help turn on and off specific parts of DNA. For example, while every cell in your body (except sperm and eggs, and I think red blood cells?) has identical DNA, they obviously do different stuff. Eyeball cells and brain cells and muscle cells are all coded from the exact same DNA, but epigenetic factors make sure you don’t end up with muscles wiggling around in your eye sockets–or as an undifferentiated mass of slime.
If external environmental things can have epigenetic effects, I’d expect cancer to be a biggie, due to cell division and differentiation being epigenetic.
What epigenetics probably doesn’t do is everything people want it to do.
There’s a history, here, of people really wanting genetics to do things it doesn’t–to impose free will onto it.* Lamarck can be forgiven–we didn’t know about DNA back then. His theory was that an organism can pass on characteristics that it acquired during its lifetime to its offspring, thus driving evolution. The classic example given is that if a giraffe stretches its neck to reach leaves high up in the trees, its descendants will be born with long necks. It’s not a bad theory for a guy born in the mid 1700s, but science has advanced a bit since then.
The USSR put substantial resources into trying to make environmental effects show up in one’s descendants–including shooting anyone who disagreed.
Trofim Lysenko, a Soviet agronomist, claimed to be able to make wheat that would grow in winter–and pass on the trait to its offspring–by exposing the wheat seeds to cold. Of course, if that actually worked, Europeans would have developed cold-weather wheat thousands of years ago.
Lysenko was essentially the USSR’s version of an Affirmative Action hire:
“By the late 1920s, the Soviet political leaders had given their support to Lysenko. This support was a consequence, in part, of policies put in place by the Communist Party to rapidly promote members of the proletariat into leadership positions in agriculture, science and industry. Party officials were looking for promising candidates with backgrounds similar to Lysenko’s: born of a peasant family, without formal academic training or affiliations to the academic community.” (From the Wikipedia page on Lysenko)
In 1940, Lysenko became director of the USSR’s Academy of Science’s Institute of Genetics–a position he would hold until 1965. In 1948, scientific dissent from Lysenkoism was formally outlawed.
“From 1934 to 1940, under Lysenko’s admonitions and with Stalin’s approval, many geneticists were executed (including Isaak Agol, Solomon Levit, Grigorii Levitskii, Georgii Karpechenko and Georgii Nadson) or sent to labor camps. The famous Soviet geneticist Nikolai Vavilov was arrested in 1940 and died in prison in 1943. Hermann Joseph Muller (and his teachings about genetics) was criticized as a bourgeois, capitalist, imperialist, and promoting fascism so he left the USSR, to return to the USA via Republican Spain.
In 1948, genetics was officially declared “a bourgeois pseudoscience”; all geneticists were fired from their jobs (some were also arrested), and all genetic research was discontinued.” (From the Wikipedia page on Lysenkoism.)
Alas, the Wikipedia does not tell me if anyone died from Lysenkoism itself, say, after their crops failed, but I hear the USSR doesn’t have a great agricultural record.
Lysenko got kicked out in the 60s, but his theories have returned in the form of SJW-inspired claims of the magic of epigenetics to explain how any differences in average group performance or behavior is actually the fault of long-dead white people. Eg:
” The science of epigenetics, literally “above the gene,” proposes that we pass along more than DNA in our genes; it suggests that our genes can carry memories of trauma experienced by our ancestors and can influence how we react to trauma and stress.”
That’s a bold statement. At least Pember is making Walker’s argument for him.
Of course, that’s not actually what epigenetics says, but I’ll get to that in a bit.
“The Academy of Pediatrics reports that the way genes work in our bodies determines neuroendocrine structure and is strongly influenced by experience.”
That’s an interesting source. While I am sure the A of P knows its stuff, their specialty is medical care for small children, not genetics. Why did Pember not use an authority on genetics?
Note: when thinking about whether or not to trust an article’s science claims, consider the sources they use. If they don’t cite a source or cite an unusual, obscure, or less-than-authoritative source, then there’s a good chance they are lying or cherry-picking data to make a claim that is not actually backed up by the bulk of findings in the field. Notice that Pember does not provide a link to the A of P’s report on the subject, nor provide any other information so that an interested reader can go read the full report.
Wikipedia is actually a decent source on most subjects. Not perfect, of course, but it is usually decent. If I were writing science articles for pay, I would have subscriptions to major science journals and devote part of my day to reading them, as that would be my job. Since I’m just a dude with a blog who doesn’t get paid and so can’t afford a lot of journal memberships and has to do a real job for most of the day, I use a lot of Wikipedia. Sorry.
Also, I just want to note that the structure of this sentence is really wonky. “The way genes work in our bodies”? As opposed to how they work outside of our bodies? Do I have a bunch of DNA running around building neurotransmitters in the carpet or something? Written properly, this sentence would read, “According to the A of P, genes determine neuroendocrine structures, in a process strongly influenced by experience.”
“Trauma experienced by earlier generations can influence the structure of our genes, making them more likely to “switch on” negative responses to stress and trauma.”
Pember does not clarify whether she is continuing to cite from the A of P, or just giving her own opinions. The structure of the paragraph implies that this statement comes from the A of P, but again, no link to the original source is given, so I am hard pressed to figure out which it is.
At any rate, this doesn’t sound like something the A of P would say, because it is obviously and blatantly incorrect. Trauma *may* affect the structure of one’s epigenetics, but not the structure of one’s genes. The difference is rather large. Viruses and ionizing radiation can change the structure of your DNA, but “trauma” won’t.
” The now famous 1998 ACES study conducted by the Centers for Disease Control (CDC) and Kaiser Permanente showed that such adverse experiences could contribute to mental and physical illness.”
Um, no shit? Is this one of those cases of paying smart people tons of money to tell us grass is green and sky is blue? Also, that’s a really funny definition of “famous.” Looks like the author is trying to claim her sources have more authority than they actually do.
“Folks in Indian country wonder what took science so long to catch up with traditional Native knowledge.”
“According to Bitsoi, epigenetics is beginning to uncover scientific proof that intergenerational trauma is real. Historical trauma, therefore, can be seen as a contributing cause in the development of illnesses such as PTSD, depression and type 2 diabetes.”
Okay, do you know what epigenetics actually shows?
The experiment Wikipedia cites is of male mice who were trained to fear a certain smell by giving them small electric shocks when they smelled the smell. The children of these mice, conceived after the foot-shocking was finished, startled in response to the smell–they had inherited their father’s epigenetic markers that enhanced their response to that specific smell.
It’s a big jump from “mice startle at smells” to “causes PTSD.” This is a big jump in particular because of two things:
1. Your epigenetics change all the time. It’s like learning. You don’t just learn one thing and then have this one thing you’ve learned stuck in your head for the entire rest of your life, unable to learn anything new. Your epigenetics change in response to life circumstances throughout your entire life.
“One of the first high-throughput studies of epigenetic differences between monozygotic twins focused in comparing global and locus-specific changes in DNA methylation and histone modifications in a sample of 40 monozygotic twin pairs. In this case, only healthy twin pairs were studied, but a wide range of ages was represented, between 3 and 74 years. One of the major conclusions from this study was that there is an age-dependent accumulation of epigenetic differences between the two siblings of twin pairs. This accumulation suggests the existence of epigenetic ‘drift’.”
In other words, when identical twins are babies, they have very similar epigenetics. As they get older, their epigenetics get more and more different because they have had different experiences out in the world, and their experiences have changed their epigenetics. Your epigenetics change as you age.
Which means that the chances of the exact same epigenetics being passed down from father to child over many generations are essentially zilch.
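The “essentially zilch” intuition can be put in toy-model terms: if a given epigenetic mark has some independent chance of being erased or rewritten at each transmission, then its probability of surviving intact decays geometrically with the number of generations. A sketch, where the per-generation reset probability is a purely hypothetical parameter, not a measured biological rate:

```python
def mark_survival(p_reset, generations):
    """Probability that a specific epigenetic mark is transmitted intact
    through `generations` successive transmissions, assuming an independent
    chance `p_reset` of being erased or rewritten each time (toy assumption).
    """
    return (1 - p_reset) ** generations

# Even a modest per-generation reset rate compounds quickly:
for p in (0.5, 0.9):
    print(p, [round(mark_survival(p, n), 5) for n in (1, 3, 5)])
```

Under any reset rate much above zero, the survival probability collapses within a handful of generations, which is the point being made about epigenetics constantly updating in response to new experience.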
2. Tons of populations have experienced trauma. If you go back far enough in anyone’s family tree, you can probably find someone who has experienced trauma. My grandparents went through trauma during the Great Depression and WWII. My biological parents were both traumatized as children. So have millions, perhaps billions of other people on this earth. If trauma gets encoded in people’s DNA (or their epigenetics,) then it’s encoded in virtually every person on the face of this planet.
Type 2 Diabetes, Depression, and PTSD are not evenly distributed across the planet. Hell, they aren’t even common in all peoples who have had recent, large oppression events. African Americans have low levels of depression and commit suicide at much lower rates than whites–have white Americans suffered more oppression than black Americans? Whites commit suicide at a higher rate than Indians–have the whites suffered more historical trauma? On a global scale, Israel has a relatively low suicide rate–lower than India’s. Did India recently experience some tragedy worse than the Holocaust? (See yesterday’s post for all stats.)
Type 2 Diabetes reaches its global maximum in Saudi Arabia, Oman, and the UAE, which as far as I know have not been particularly traumatized lately, and is much lower among Holocaust descendants in nearby Israel:
It’s also very low in Sub-Saharan Africa, even though all of the stuff that causes “intergenerational trauma” probably happened there in spades. Have Americans been traumatized more than the Congolese?
This map doesn’t make any sense from the POV of historical trauma. It makes perfect sense if you know who’s eating fatty Western diets they aren’t adapted to. Saudi Arabia and the UAE are fucking rich (I bet Oman is, too), and their population of nomadic goat herders has settled down to eat all the cake they want. The former nomadic lifestyle did not equip them to digest lots of refined grains, which are hard to grow in the desert. Most of Africa (and Yemen) is too poor to gorge on enough food to get Type-2 Diabetes; China and Mongolia have stuck to their traditional diets, to which they are well adapted. Mexicans are probably not adapted to wheat. The former Soviet countries have probably adopted Western diets. Etc., etc.
Why bring up Type-2 Diabetes at all? Well, it appears Indians get Type-2 Diabetes at about the same rate as Mexicans, [Note: PDF] probably for the exact same reasons: their ancestors didn’t eat a lot of wheat, refined sugar, and refined fats, and so they aren’t adapted to the Western diet. (FWIW, White Americans aren’t all that well adapted to the Western Diet, either.)
Everybody who isn’t adapted to the Western Diet gets high rates of diabetes and obesity if they start eating it, whether they had historical trauma or not. We don’t need epigenetic trauma to explain this.
“The researchers found that Native peoples have high rates of ACE’s and health problems such as posttraumatic stress, depression, substance abuse, and diabetes–all linked with methylation of genes regulating the body’s response to stress. ‘The persistence of stress associated with discrimination and historical trauma converges to add immeasurably to these challenges,’ the researchers wrote.
Since there is a dearth of studies examining these findings, the researchers stated they were unable to conclude a direct cause between epigenetics and high rates of certain diseases among Native Americans.”
There’s a dearth of studies because it’s really immoral to purposefully traumatize humans and then breed them to see if their kids come out fucked up. Luckily for us (or not luckily, depending on how you look at it), however, humans have been traumatizing each other for ages, so we can just look at actually traumatized populations. There does seem to be an effect down the road for people whose parents or grandparents went through famines, but “the effects could last for two generations.”
As horrible as the treatment of the Indians has been, I am pretty sure they didn’t go through a famine two generations ago on the order of what happened when the Nazis occupied the Netherlands and 18,000–22,000 people starved.
In other words, there’s no evidence of any long-term epigenetic effects large enough to create the effects they’re claiming. As I’ve said, if epigenetics actually acted like that, virtually everyone on earth would show the effects.
The reason they don’t is because epigenetic effects are relatively short-lived. Your epigenetics get re-written throughout your lifetime.
“Researchers such as Shannon Sullivan, professor of philosophy at UNC Charlotte, suggest in her article ‘Inheriting Racist Disparities in Health: Epigenetics and the Transgenerational Effects of White Racism,’ that the science has faint echoes of eugenics, the social movement claiming to improve genetic features of humans through selective breeding and sterilization.”
I’m glad the philosophers are weighing in on science. I am sure philosophers know all about genetics. Hey, remember what I said about citing sources that are actual authorities on the subject at hand? My cousin Bob has all sorts of things to say about epigenetics, but that doesn’t mean his opinions are worth sharing.
The article ends:
“Isolating and nurturing a resilience gene may well be on the horizon.”
How do you nurture a gene?
There are things that epigenetics do. Just not the things people want them to do.
This morning I found a strange, worm-like creature wiggling around in the garden. It was about 5 inches long and thinner than a pin–completely wrong proportions for an earthworm, and further, it was waving its upper end in the air in a manner that earthworms can’t.
I am no worm expert, but my assumption is that there’s only one good reason to be that thin and that long: to make burrowing into someone else’s body easier.
Being a stupid hippie, I wear sandals everywhere (except in snow; I have found it more comfortable to be barefoot in snow than in sandals, and I do not find it comfortable to be barefoot in the snow).
Forget that shit. I now have nice, sturdy boots for the garden.