The results were striking. Various combinations of height, weight, and head shape were significantly related to 90% of the negative C-BARQ behavioral traits. Further, in nearly all cases, the smaller the dogs, the more problematic behaviors their owners reported. Here are some examples.
Height – Short breeds were more prone to beg for food, have serious attachment problems, be afraid of other dogs, roll in feces, be overly sensitive to touch, defecate and urinate when left alone, and be harder to train. They also were more inclined to hump people’s legs.
So what’s up with small dogs? Let’s run through the obvious factors first:
Culling: Behavioral and psychological problems obviously get bred out of large dogs more quickly. An anxious pug is cute; an anxious doberman is a problem. A chihuahua who snaps at children is manageable; a rottweiler who snaps at children gets put down.
Training: Since behavioral problems are more problematic in larger dogs, their owners (who chose them in the first place,) are stricter from the beginning about problematic behaviors. No one cares if a corgi begs at the dinner table; a St. Bernard who thinks he’s going to eat off your plate gets unmanageable fast.
Rational behaviors: Since small dogs are small, some of the behaviors listed in the article make sense. They pee indoors by accident more often because they have tiny bladders and just need to pee more often than large dogs (and they have to drink more often). They are more fearful because being smaller than everything around them actually is frightening.
Accident of Breeding: Breeding for one trait can cause other traits to appear by accident. For example, breeding for tameness causes changes to animals’ pelt colors, for reasons we don’t yet know. Breeding for small dogs simultaneously breeds for tiny brains, and dogs with tiny brains are stupider than dogs with bigger brains. Stupider dogs are harder to train and may just have more behavioral issues. They may also attempt behaviors (guarding, hunting, herding, etc) that are now very difficult for them due to their size.
Accident of training: people get small dogs and then stick them in doggy carriages, dress them in doggy clothes, and otherwise baby them, preventing them from being properly trained. No wonder such dogs are neurotic.
And finally, That’s not a Bug, it’s a Feature: Small dogs have issues because people want them to.
Small dogs are bred to be companions to people, usually women (often lonely, older women whose children have moved out of the house and don’t call as often as they should). As such, these dogs are bred to have amusing, human-like personalities–including psychological problems.
Lonely people desire dogs that will stay by them, and so favor anxious dogs. Energetic people favor hyperactive dogs. Anti-social people who don’t want to bond emotionally with others get a snake.
There’s an analogy here with other ways people meet their emotional/psychological needs, like Real Dolls and fake babies (aka “reborns”). The “reborn” doll community contains plenty of ordinary collectors and many grieving parents whose babies died or were stillborn and some older folks with Alzheimer’s, as well as some folks who clearly take it too far and enter the creepy territory.
Both puppies and babydolls are, in their way, stand-ins for the real thing (children,) but dogs are also actually alive, so people don’t feel stupid taking care of dogs. Putting your dog in a stroller or dressing it up in a cute outfit might be a bit silly, but certainly much less silly than paying thousands of dollars to do the same thing to a doll.
And unlike dolls, dogs actually respond to our emotions and have real personalities. As Jon Katz argues, we now use dogs, in effect, for their emotional work:
In an increasingly fragmented and disconnected society, dogs are often treated not as pets, but as family members and human surrogates. The New Work of Dogs profiles a dozen such relationships in a New Jersey town, like the story of Harry, a Welsh corgi who provides sustaining emotional strength for a woman battling terminal breast cancer; Cherokee, companion of a man who has few friends and doesn’t know how to talk to his family; the Divorced Dogs Club, whose funny, acerbic, and sometimes angry women turn to their dogs to help them rebuild their lives; and Betty Jean, the frantic founder of a tiny rescue group that has saved five hundred dogs from abuse or abandonment in recent years.
Normally we’d call this “bonding,” “loving your dog,” or “having a friend,” but we moderns have to overthink everything and give it fussy labels like “emotional work.” We’re silly, but thankfully our dogs put up with us.
Of course there are smart people who are insane, and dumb people who are completely rational. But if we define intelligence as having something to do with accurately understanding and interpreting the information we constantly receive from the world, necessary to make accurate predictions about the future and how one’s interactions with others will go, there’s a clear correlation between accurately understanding the world and being sane.
In other words, a sufficiently dumb person, even a very sane one, will be unable to distinguish between accurate and inaccurate depictions of reality and so can easily espouse beliefs that sound, to others, completely insane.
Is there any way to distinguish between a dumb person who believes wrong things by accident and a smart person who believes wrong things because they are insane?
Digression: I have a friend who was homeless for many years. Eventually he was diagnosed as mentally ill and given a disability check.
“Why?” he asked, but received no answer. He struggled (and failed) for years to prove that he was not disabled.
Eventually he started hearing voices, was diagnosed with schizophrenia, and put on medication. Today he is not homeless, due at least in part to the positive effects of anti-psychotics.
The Last Psychiatrist has an interesting post (deleted from his blog, but re-posted elsewhere,) on how SSI is determined:
Say you’re poor and have never worked. You apply for Welfare/cash payments and state Medicaid. You are obligated to try and find work or be enrolled in a jobs program in order to receive these benefits. But who needs that? Have a doctor fill out a form saying you are Temporarily Incapacitated due to Medical Illness. Yes, just like 3rd grade. The doc will note the diagnosis, however, it doesn’t matter what your diagnosis is, it only matters that a doctor says you are Temporarily Incapacitated. So cancer and depression both get you the same benefits.
Nor does it matter if he medicates you, or even believes you, so long as he signs the form and writes “depression.”(1) The doc can give you as much time off as he wants (6 months is typical) and you can return, repeatedly, to get another filled out. You can be on state medicaid and receive cash payments for up to 5 years. So as long as you show up to your psych appointments, you’ll receive benefits with no work obligation.
“That’s not how it works for me”
you might say, which brings us to the whole point: it’s not for you. It is for the entire class of people we label as poor, about whom comic Greg Giraldo joked: “it’s easy to forget there’s so much poverty in the United States, because the poor people look just like black people.” Include inner city whites and hispanics, and this is how the government fights the War On Poverty.
In the inner cities, the system is completely automated. Poor person rolls in to the clinic, fills out the paperwork (doc signs a stack of them at the end of the day), he sees a therapist, a doctor, +/- medications, and gets his benefits.
There’s no accountability, at all. I have never once been asked by the government whether the person deserved the money, the basis for my diagnosis– they don’t audit the charts, all that exists is my sig on a two page form. The system just is.
Enter SSI, Supplemental Security Income. You can earn lifetime SSI benefits (about $600/mo + medical insurance) if “you” can “show” you are “Permanently Disabled” due to a “medical illness.”
“You” = your doc who fills out a packet with specific questions; and maybe a lawyer who processes the massive amounts of other paperwork, and argues your case, and charges about 20% of a year’s award.
“show” has a very specific legal definition: whatever the judge feels like that day. I have been involved in thousands of these SSI cases, and to describe the system as arbitrary is to describe Blake Lively as “ordinary.”
“Permanently disabled” means the illness prevents you from ever working. “But what happens when you get cured?” What is this, the future? You can’t cure bipolar.
“Medical illness” means anything. The diagnosis doesn’t matter, only that “you” show how the diagnosis makes it impossible for you to work. Some diagnoses are easier than others, but none are impossible. “Unable to work” has specific meaning, and specific questions are asked: ability to concentrate, ability to complete a workweek, work around others, take criticism from supervisors, remember and execute simple/moderately difficult/complex requests and tasks, etc.
Fortunately, your chances of being awarded SSI are 100%…
It’s a good post. You should read the whole thing.
TLP’s point is not that the poor are uniformly mentally ill, but that our country is using the disability system as a means of routing money to poor people in order to pacify them (and maybe make their lives better.)
I’ve been playing a bit of sleight of hand, here, subbing in “poor” and “dumb.” But they are categories that highly overlap, given that dumb people have trouble getting jobs that pay well. Despite TLP’s point, many of the extremely poor are, by the standards of the middle class and above, mentally disabled. We know because they can’t keep a job and pay their bills on time.
“Disabled” is a harsh word to some ears. Who’s to say they aren’t equally able, just in different ways?
Living under a bridge isn’t being differently-abled. It just sucks.
Normativity bias happens when you assume that everyone else is just like you. Middle and upper-middle class people tend to assume that everyone else thinks like they do, and the exceptions, like guys who think the CIA is trying to communicate with them via the fillings in their teeth, are few and far between.
As for the vast legions of America’s unfortunates, they assume that these folks are basically just like themselves. If they aren’t very bright, this only means they do their mental calculations a little slower–nothing a little hard work, grit, mindfulness, and dedication can’t make up for. The fact that anyone remains poor, then, has to be the fault of either personal failure (immorality) or outside forces like racism keeping people down.
These same people often express the notion that academia or Mensa are crawling with high-IQ weirdos who can barely tie their shoes and are incapable of socializing with normal humans, to which I always respond that furries exist.
These people need to get out more if they think a guy successfully holding down a job that took 25 years of work in the same field to obtain and that requires daily interaction with peers and students is a “weirdo.” Maybe he wears more interesting t-shirts than a middle manager at BigCorp, but you should see what the Black Hebrew Israelites wear.
I strongly suspect that what we would essentially call “mental illness” among the middle and upper classes is far more common than people realize among the lower classes.
As I’ve mentioned before, there are multiple kinds of intellectual retardation. Some people suffer physical injuries (like shaken baby syndrome or encephalitis), some have genetic defects like Down’s Syndrome, and some are simply dull people born to dull parents. Intelligence is partly genetic, so just as some people are gifted with lucky smart genes, some are visited by the stupid fairy, who leaves only dumb ones. Life isn’t fair.
Different kinds of retardation manifest differently, with different levels of overall impairment in life skills. There are whole communities where the average person tests as mentally retarded, yet people in these communities go on providing for themselves, building homes, raising their children, etc. They do not do so in the same ways as we would–and there is an eternal chicken and egg debate about whether the environment they are raised in causes their scores, or their scores cause their environment–but nevertheless, they do.
All of us humans are descended from people who were significantly less intelligent than ourselves. Australopithecines were little smarter than chimps, after all. The smartest adult pygmy chimps (bonobos), like Kanzi, know only about 3,000 words, which is about the same as a 3 or 4 year old human. (We marvel that chimps can do things a kindergartener finds trivial, like turn on the TV.) Over the past few million years, our ancestors got a lot smarter.
How do chimps think about the world? We have no particular reason to assume that they think about it in ways that substantially resemble our own. While they can make tools and immediately use them, they cannot plan for tomorrow (dolphins probably beat them at planning.) They do not make sentences of more than a few words, much less express complex ideas.
Different humans (and groups of humans) also think about the world in very different ways from each other–which is horrifyingly obvious if you’ve spent any time talking to criminals. (The same people who think nerds are weird and bad at socializing ignore the existence of criminals, despite strategically moving to neighborhoods with fewer of them.)
Even non-criminal communities have all sorts of strange practices, including cannibalism, human sacrifice, wife burning, genital mutilation, coprophagy, etc. Anthropologists (and economists) have devoted a lot of effort to trying to understand and explain these practices as logical within their particular contexts–but a different explanation is possible: that different people sometimes think in very different ways.
For example, some people think there used to be Twa Pygmies in Ireland, before that nefarious St. Patrick got there and drove out all of the snakes. (Note: Ireland didn’t have snakes when Patrick arrived.)
(My apologies for this being a bit of a ramble, but I’m hoping for feedback from other people on what they’ve observed.)
This was a remarkable study in two ways. First, it had a sample size of 5,940,778, followed up for 83.9 million person-years–basically, the entire population of Denmark over 15 years. (Big Data indeed.)
Second, it found that for virtually every disorder, one diagnosis increased your chances of being diagnosed with a second disorder. (“Comorbid” is a fancy word for “two diseases or conditions occurring together,” not “dying at the same time.”) Some diseases were particularly likely to co-occur–in particular, people diagnosed with “mood disorders” had a 30% chance of also being diagnosed with “neurotic disorders” during the 15 years covered by the study.
Mood disorders include bipolar disorder, depression, and SAD;
Neurotic disorders include anxieties, phobias, and OCD.
Those chances were considerably higher for people diagnosed at younger ages, and decreased significantly for the elderly–those diagnosed with mood disorders before the age of 20 had a +40% chance of also being diagnosed with a neurotic disorder, while those diagnosed after 80 had only a 5% chance.
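To be concrete about what these percentages mean, here is a toy calculation (the counts are made up for illustration, not taken from the Danish data): the study’s “30% chance” is a conditional probability, the share of people with a first diagnosis who later receive the second.

```python
# Toy numbers, NOT the actual Danish data: the "30% chance" is a
# conditional probability, P(neurotic diagnosis | mood diagnosis).
mood_patients = 1000   # hypothetical cohort diagnosed with a mood disorder
also_neurotic = 300    # of those, later diagnosed with a neurotic disorder

p_comorbid = also_neurotic / mood_patients
print(f"P(neurotic | mood) = {p_comorbid:.0%}")  # prints "P(neurotic | mood) = 30%"
```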
I don’t find this terribly surprising, since I know someone with at least five different psychological diagnoses, (nor is it surprising that many people with “intellectual disabilities” also have “developmental disorders”) but it’s interesting just how pervasive comorbidity is across conditions that are ostensibly separate diseases.
This suggests to me that either many people are being mis-diagnosed (perhaps diagnosis itself is very difficult,) or what look like separate disorders are often actually one, single disorder. While it is certainly possible, of course, for someone to have both a phobia of snakes and seasonal affective disorder, the person I know with five diagnoses most likely has only one “true” disorder that has just been diagnosed and treated differently by different clinicians. It seems likely that some people’s depression also manifests itself as deep-rooted anxiety or phobias, for example.
While this is a bit of a blow for many psychiatric diagnoses, (and I am quite certain that many diagnostic categories will need a fair amount of revision before all is said and done,) autism recently got a validity boost–How brain scans can diagnose Autism with 97% accuracy.
The title is overselling it, but it’s interesting anyway:
Lead study author Marcel Just, PhD, professor of psychology and director of the Center for Cognitive Brain Imaging at Carnegie Mellon University, and his team performed fMRI scans on 17 young adults with high-functioning autism and 17 people without autism while they thought about a range of different social interactions, like “hug,” “humiliate,” “kick” and “adore.” The researchers used machine-learning techniques to measure the activation in 135 tiny pieces of the brain, each the size of a peppercorn, and analyzed how the activation levels formed a pattern. …
So great was the difference between the two groups that the researchers could identify whether a brain was autistic or neurotypical in 33 out of 34 of the participants—that’s 97% accuracy—just by looking at a certain fMRI activation pattern. “There was an area associated with the representation of self that did not activate in people with autism,” Just says. “When they thought about hugging or adoring or persuading or hating, they thought about it like somebody watching a play or reading a dictionary definition. They didn’t think of it as it applied to them.” This suggests that in autism, the representation of the self is altered, which researchers have known for many years, Just says.
N=34 is not quite as impressive as N=Denmark, but it’s a good start.
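For a sense of what “identify whether a brain was autistic or neurotypical in 33 out of 34 participants” involves, here is a minimal sketch of that kind of analysis: leave-one-out cross-validation with a nearest-centroid rule on 34 synthetic activation patterns. Everything here (the data, the shift separating the groups, the classifier choice) is invented for illustration; the study’s actual machine-learning pipeline is not described in enough detail to reproduce.

```python
import random

# Sketch of the kind of analysis described above -- NOT the authors' code.
# 34 synthetic "activation patterns" (17 per group), classified by
# leave-one-out cross-validation with a nearest-centroid rule.
random.seed(0)
N_REGIONS = 135  # the study measured 135 peppercorn-sized brain regions

def fake_pattern(shift):
    """Invented activation vector; `shift` is what separates the groups."""
    return [random.gauss(shift, 1.0) for _ in range(N_REGIONS)]

labeled = [(fake_pattern(0.0), "ASD") for _ in range(17)] + \
          [(fake_pattern(0.7), "NT") for _ in range(17)]

def centroid(patterns):
    return [sum(col) / len(col) for col in zip(*patterns)]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

correct = 0
for i, (pattern, label) in enumerate(labeled):
    rest = labeled[:i] + labeled[i + 1:]  # classify using the other 33
    guess = min(("ASD", "NT"),
                key=lambda g: sq_dist(pattern,
                                      centroid([p for p, l in rest if l == g])))
    correct += (guess == label)

print(f"{correct}/{len(labeled)} classified correctly")
```

With synthetic data this separable, the toy classifier gets nearly everything right; the interesting question with the real study is whether 34 participants is enough to rule out overfitting.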
Note: this is just a theory, developed in reaction to recent conversations.
From Twitter user FinchesofDarwin comes an interesting tale, about a polygynously-married woman in Guiana:
Manwaiko had two wives, and each of these had a family of young children. … Between the two wives and their respective children little kindness seemed to exist. One evening, while the party were squatting on the ground, eating their supper… one of the wives, who with her children had been employed in cutting firewood, discovered, on her return, that the supper for herself and her family was not to be found, having been carried off by some animal through neglect or connivance of her rival. It could hardly be expected that she would sit down quietly without the evening meal for herself and her children… and she accordingly applied to Manwaiko for a share of his allowance, which was ample. He treated her request with contempt… She then commenced a furious torrent of abuse, during which he finished his meal with great composure, until, being irritated at his indifference, she at last told him that he was no “capitan,” no father, and no man. …
Such stormy ebullitions of temper are rare in the Indian families, though, where polygamy is practiced, continual variance and ill-feeling are found.
From The Indian Tribes of Guiana, their Condition and Habits, by Reverend Brett, 1868
As we were discussing Friday, one form of female sociopathy (at least relevant to this conversation) likely involves manipulating or coercing others into providing resources for her children. On Monday we discussed mental illness and its effects on fertility (generally it lowers fertility in men, but depression has little to no effect on women, neuroticism may enhance fertility, and sometimes the sisters of people with mental illnesses have slightly increased fertility, suggesting that low levels of certain traits may be beneficial.)
Here is where I get 100% speculative, and to be frank, I don’t like saying negative things about women (since I am one,) but if men can be sociopaths, then women can, too–and conversely, the majority of men are not sociopaths, and neither are the majority of women.
In the quoted passage, we see two common tropes: First, the evil stepmother, in the form of the wife who let wild animals make off with half of the family’s food. Second, the crazy bitch, who goes on a tirade questioning her husband’s manliness because he has failed to provide food for her children.
In this case, only the first woman is truly sociopathic (she has harmed the other woman and her children,) but we can see how the second’s behavior could easily spill over into unreasonable demands.
Female sociopathy–manipulating men out of their money–only works as an evolutionary strategy in an environment where men themselves vary in their trustworthiness and cannot be easily predicted. If the men in a society can be counted upon to always provide for their offspring, women have no need to try to manipulate them into doing so; if men in a society flat out refuse to do so, then there is no point to trying. Only in a situation where you can affect the amount of resources you get out of a man will there be any point to doing so.
Given the environmental constraints, sociopathic female behavior is likely to increase in reaction to an increase in sociopathic male behavior–that is, when women fear abandonment or an inability to care for their children.
This manipulation has two targets–first, the father of the child, whom the woman wishes to prevent from wandering off and having children with other women, or barring that, from giving them any resources. Second, should this fail, or the male be too violent for women and children to be near, the woman targets a new male to convince him to care for her, her children, and possibly beat the resources out of the old male.
Since children actually do need to eat, and getting enough resources can be tough, society is generally fine with women doing what they need to provide for their families (unlike men doing whatever they need to maximize reproduction, which usually ends with the police informing you that no, you cannot go “Genghis Khan” on Manhattan.)
But at times women really do go overboard, earning the title of “crazy ex.” Here’s part of one woman’s helpful list of why she went crazy:
1. He told me he loved me, then he left me. … I wasn’t going to make it easy for him to leave me. I promised myself I’d fight for my relationship because I loved him and he said he loved me. … 3. If you didn’t know, one of the quickest ways to drive a woman insane is to ignore her. … This was the most severe phase of crazy for me. I was infuriated that not only was I losing my relationship and wasn’t given a reason why, but I was being blatantly ignored by him too! … 4. He told me not to worry about his “friend,” and now he’s dating her.
Back before the invention of birth control, a woman who got dumped like this was most likely pregnant, if not already caring for several children. Abandonment was a big deal, and she had every reason not to just let her partner wander off and start impregnating new chicks.
In our modern world, he made it clear that he didn’t want to be in a relationship anymore and left.
After my ex boyfriend broke up with me I went crazy… After he dumped me for the third time I felt used and devastated. I wanted an explanation and answers. He was a jerk to me. A cruel son of a bitch. I kept begging, calling, and begging. I never got a reply back. This went on for over 3 months. …
This isn’t the only kind of “crazy” I’ve seen around, though.
An acquaintance recently recounted a story about an ex who actually ended up in the mental hospital for suicidal ideation. She listed him as her contact, something he was not exactly keen on, having already told her the relationship was over.
Then there is the phenomenon of people actually claiming to be crazy, often with rather serious disorders that you would not normally think they would want to reveal to others. For example, I have recently seen several young women claim to have Multiple Personality Disorder–a condition that is not in the DSM and so you can no longer get diagnosed with it. Though you can get diagnosed with Dissociative Identity Disorder, this disorder is rare and quite controversial, and I would expect anyone with a real diagnosis to use the real name, just as few schizophrenics claim to have been diagnosed with dementia praecox.
MPD is less of a real disorder and more of a fad spread by movies, TV, and unscrupulous shrinks, though many people who claim to have it are quite genuinely suffering.
(I should emphasize that in most of these cases, the person in question is genuinely suffering.)
Most of these cases–MPD, PTSD, etc–are supposedly triggered by traumatic experiences, such as childhood abuse or spousal abuse. (Oddly, being starved half to death in a POW camp doesn’t seem to trigger MPD.) And yet, despite the severity of these conditions, people I encounter seem to respond positively to these claims of mental illness–if anything, a claim of mental illness seems to get people more support.
So I suggest a potential mechanism:
First, everyone of course has a pre-set range of responses/behaviors they can reasonably call up, but these ranges vary from person to person. For example, I will run faster if my kids are in danger than if I’m late for an appointment, but you may be faster than me even when you’re just jogging.
Second, an unstable, violent, or neglectful environment triggers neuroticism, which in turn triggers mental instability.
Third, mental instability attracts helpers, who try to “rescue” the woman from bad circumstances.
Fourth, sometimes this goes completely overboard into trying to destroy an ex, convincing a new partner to harm the ex, spreading untrue rumors about the ex, etc. Alternatively, it goes overboard with the woman becoming unable to cope with life and needing psychiatric treatment/medication.
Since unstable environments trigger mental instability in the first place, sociopathic men are probably most likely to encounter sociopathic women, which makes the descriptions of female sociopathy automatically sound very questionable:
“My crazy ex told all of our friends I gave her gonorrhea!”
“Yeah, but that was after you stole $5,000 from her and broke two of her ribs.”
This makes it difficult to collect objective information on the matter, and is why this post is very, very speculative.
Note: this is just a theory, developed in reaction to recent conversations.
Let us assume, first of all, that men and women have different optimal reproductive strategies, based on their different anatomy. In case you have not experienced birth yourself, it’s a difference of calories, time, and potential death.
In the ancestral environment (before child support laws, abortion, birth control, or infant formula):
For men, the absolute minimal paternal investment in a child–immediate abandonment–involves a few minutes of effort and a spoonful of semen. There are few dangers involved, except for the possibility of other males competing for the same female. A hypothetical man could, with very little strain or extra physical effort, father thousands of children–gay men regularly go through the physical motions of doing just that, and hardly seem exhausted by the effort.
For women, the absolute minimal parental investment is nine months of gestation followed by childbirth. This is calorically expensive, interferes with the mother’s ability to take care of her other children, and could kill her. A woman who tried to maximize her pregnancies from menarche to menopause might produce 25 children.
If a man abandons his children, there is a decent chance they will still survive, because they can be nursed by their mother; if a woman abandons her child, it is likely to die, because its father cannot lactate and so cannot feed it.
In sum, for men, random procreative acts (ie, sex) are extremely low-cost and still have the potential to produce offspring. For women, random procreative acts are extremely costly. So men have an incentive to spread their sperm around and women have an incentive to be picky about when and with whom they reproduce.
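The “25 children” figure above is easy to sanity-check. The ages and birth-spacing here are assumed round numbers, not data:

```python
# Back-of-the-envelope check of "might produce 25 children" (assumed numbers):
fertile_years = 36     # roughly menarche (~12) to menopause (~48)
months_per_child = 17  # 9 months gestation + ~8 months before the next conception
max_children = fertile_years * 12 // months_per_child
print(max_children)    # prints 25
```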
This is well known to, well, everyone.
Now, obviously most men do not abandon their children (nor do most women.) It isn’t in their interest to do so. A man’s children are more likely to survive and do well in life if he invests in them. (In a few societies where paternity is really uncertain, men invest resources in their sisters’ children, who are at least related to them, rather than opting out altogether.) As far as I know, some amount of male input into their children or their sisters’ children is a human universal–the only variation is in how much.
Men want to invest in their children because this helps their children succeed, but a few un-tended bastards here and there are not a major problem. Some of them might even survive.
By contrast, women really don’t want to get saddled with bastards.
We may define sociopathy, informally, as attempting to achieve evolutionary ends by means that harm others in society, eg, stealing. In this case, rape and child abandonment are sociopathic ways of increasing men’s reproductive success at the expense of other people. (Note that sociopathy doesn’t have a formal definition and I am using it here as a tool, not a real diagnosis. If someone has a better term, I’m happy to use it.)
This is, again, quite obvious–everyone knows that men are much more likely than women to be imprisoned for violent acts, rape included. Men are also more likely than women to try to skip out on their child support payments.
Note that this “sociopathy” is not necessarily a mental illness, (a true illness ought to make a dent in one’s evolutionary success.) Genghis Khan raped a lot of women, and it turned out great for his genes. It is simply a reproductive strategy that harms other people.
So what does female sociopathy look like?
It can’t look like male sociopathy, because child abandonment decreases a woman’s fertility. For a woman, violence and abandonment would be signs of true mental defects. Rather, we want to look at ways women improve their chances of reproductive success at the expense of others.
In other words, female sociopathy involves manipulating or coercing others into providing resources for her children.
But it’s getting late; let’s continue with part 2 on Monday. (Wednesday is book club.)
I ran across an interesting study today, on openness, creativity, and cortical thickness.
The psychological trait of “openness”–that is, willingness to try new things or experiences–correlates with other traits like creativity and political liberalism. (This might be changing as cultural shifts are changing what people mean by “liberalism,” but it was true a decade ago and is still statistically true today.)
Cortical thickness is a brain morphometric measure used to describe the combined thickness of the layers of the cerebral cortex in mammalian brains, either in local terms or as a global average for the entire brain. Given that cortical thickness roughly correlates with the number of neurons within an ontogenetic column, it is often taken as indicative of the cognitive abilities of an individual, albeit the latter are known to have multiple determinants.
“The key finding from our study was that there was a negative correlation between Openness and cortical thickness in regions of the brain that underlie memory and cognitive control. This is an interesting finding because typically reduced cortical thickness is associated with decreased cognitive function, including lower psychometric measures of intelligence,” Vartanian told PsyPost.
Citizendium explains some of the issues associated with too-thin or too-thick cortices:
Typical values in adult humans are between 1.5 and 3 mm, and during aging, a decrease (also known as cortical thinning) on the order of about 10 μm per year can be observed. Deviations from these patterns can be used as diagnostic indicators for brain disorders: While Alzheimer’s disease, even very early on, is characterized by pronounced cortical thinning, Williams syndrome patients exhibit an increase in cortical thickness of about 5-10% in some regions, and lissencephalic patients show drastic thickening, up to several centimetres in occipital regions.
Obviously people with Alzheimer’s have difficulty remembering things, but people with Williams Syndrome also tend to be low-IQ and have difficulty with memory.
Of course, the cortex is a big region, and it may matter specifically where yours is thin or thick. In this study, the thinness was found in the left middle frontal gyrus, left middle temporal gyrus, left superior temporal gyrus, left inferior parietal lobule, right inferior parietal lobule, and right middle temporal gyrus.
These are areas that, according to the study’s authors, have previously been shown to be activated during neuroimaging studies of creativity, and so are precisely the places where you would expect to see some kind of anatomical difference in particularly creative people.
Hypothetically, maybe reduced cortical thickness, in some people, makes them worse at remembering specific kinds of experiences–and thus more likely to try new ones. For example, if I remember very strongly that I like Tomato Sauce A, and that I hate Tomato Sauce B, I’m likely to just keep buying A. But if every time I go to the store I only have a vague memory that there was a tomato sauce I really liked, I might just pick sauces at random–eventually trying all of them.
The authors have a different interpretation:
“We believe that the reason why Openness is associated with reduced cortical thickness is that this condition reduces the person’s ability to filter the contents of thought, thereby facilitating greater immersion in the sensory, cognitive, and emotional information that might otherwise have been filtered out of consciousness.”
So, less meta-brain, more direct experience? Less worrying, more experiencing?
The authors note a few problems with the study (for starters, it is hardly a representative sample of either “creative” people or exceptional geniuses, being limited to people in STEM,) but it is still an interesting piece of data and I hope to see more like it.
If you want to read more about brains, I recommend Kurzweil’s How to Create a Mind, which I am reading now. It goes into some detail on relevant brain structures, and how they work to create memories, recognize patterns, and let us create thought. (Incidentally, the link goes to Amazon Smile, which raises money for charity; I selected St. Jude’s.)
The terms “Conservative” and “Liberal” are much abused, and, I fear, nearly obsolete, but this thread makes use of them anyway due to a lack of good replacements. I utilize them in hope that you will understand my meaning.
Conservatism and Liberalism basically see human nature quite differently:
Conservatives see people as possessing an ultimate inner essence, some inborn quality, be it your soul, nature, or DNA. This you can mold, but cannot fundamentally change. To put it in Christian terms (since most American Conservatives are Christian), through Free Will you can make good, moral, decisions, but you cannot change the fact that you are Fallen; only through an external Salvation-through-Christ can that be changed.
In more mundane terms, through Free Will, or Virtuous Living, you can make the most of your inner essence. For example, even someone who was born dull–an unchangeable state–may be honest, hard working, and follow the advice of smarter people. A person with a tendency toward addiction may work hard to fight that addiction, avoid drugs entirely, and still live virtuously.
In this view, your nature is like clay. You can’t trade it in for wood or steel or sand, but what you do with that clay–whether you turn it into a plate or a vase or a sculpture (or a splat on the ground)–is up to you.
By contrast, Liberalism (in its theoretical form) rejects the notion of an “inner self.” You have no inner essence. There is no “you;” only a set of interactions between your body and the rest of society. The identities people use to describe themselves, man or woman, gay or straight, black or white, Christian or not, are all “social constructs” created via your interactions with the rest of society.
Like the Bohr model of the atom, your “inner essence” only exists when observed by others.
For example: suppose a person of 100% sub-Saharan ancestry had a rare skin condition that made him look white. In his daily life, as he went about his business, he would be treated like a “white” person. Suppose, in addition, he had not been raised by a black family (adopted as an infant by a non-black family) and no one ever told him he was genetically black. Would he have any consciousness of himself as a “black” person?
Or note, for example, the liberal reluctance to attribute to people even traits like “smart” or “dumb” (“Oh, those kids just went to really good schools where they had really good teachers, that’s why they did well on that test, and besides, I don’t really believe in IQ.”)
Dig a bit, and you can find people who believe things like “women do worse in sports and weightlifting than men because society has conditioned them to” and “women are shorter than men because society has consistently underfed them for centuries.”
In Liberalism, your self is not like clay, but a point of environmental intersection where all of the things that have ever happened to you or you have perceived happen to meet.
Conservatism contains a kind of optimistic belief that no matter how bad things are, “you” can, by dint of will, “pull yourself up by your bootstraps” and overcome hardships. You can exist separate from the bad things that happened and can create a good life.
Conservatism therefore tends to approach life’s difficulties as a matter of “right living.” How to lead a good life? By doing it right. Clean your room. Be polite. Honor your mother and your father. Don’t covet.
Conservatism’s approach to dealing with problems is to “get over them.” Pretend they don’t exist. In its optimistic form, it believes that this is possible and that you can overcome your problems. (In its less optimistic form, it comes across as an excuse for abandoning people to insurmountable problems.)
Liberalism contains a kind of pessimism that “you” do not exist separate from the bad events of your life, but rather are created by them. “Racism” is an essential part of what creates “black identity” and thus “black people.” While you can “redefine” and “reclaim” identities, you cannot simply “get over” a core part of your own identity. To do so would render yourself blank.
Since Liberalism defines suffering as a core part of who people are, it doesn’t tell people to reject it.
Liberalism tends to approach life’s difficulties as a result of the confluence of societal forces that have all impinged upon a single body to produce that difficulty. For example, a rock does not fall off a cliff and hit a passing car simply because the rock contained some internal desire to launch itself off a cliff, but because a confluence of forces (mostly gravity) compelled it downward. Likewise, when people misbehave, it is because of external circumstances that have created that behavior, like historical racism, sexism, malnutrition, bad schools, etc.
The solution is not to encourage “right behavior” (which is impossible) but to change thought patterns so that oppressive thought categories like “black” or “gay” will stop existing.
In other words, if whites can be convinced to stop thinking that race exists, then they will stop being racist against black people, and black people in the future can exist with identities that don’t include racial suffering.
In a slightly less abstract vein, consider the question “Why did psychology heartily endorse so many experiments that have failed to replicate?” Part of the answer is that many of those experiments conformed to the liberal, environmentalist view of human identity and behavior.
To give a bit of background: Pre-WWII, psychology was quite taken with Freudian notions that people have unconscious or subconscious thoughts and desires. Freudian ideas are hard to quantify and even harder to falsify, and thus to test in any kind of rigorous, scientific way (though there are anthropological studies that have attempted this.) Post-war, mainstream psychology went in a different direction–Skinnerian behaviorism–but behaviorism is boring because it treats people like black boxes and just looks at outcomes.
Also post-war, psychologists wanted to figure out why people would do things like stuff other humans into ovens and then claim later, “I was just following orders.” Hence the famous Milgram and Stanford Prison Experiments:
The Milgram experiment on obedience to authority figures was a series of social psychology experiments conducted by Yale University psychologist Stanley Milgram. They measured the willingness of study participants, men from a diverse range of occupations with varying levels of education, to obey an authority figure who instructed them to perform acts conflicting with their personal conscience. Participants were led to believe that they were assisting an unrelated experiment, in which they had to administer electric shocks to a “learner.” These fake electric shocks gradually increased to levels that would have been fatal had they been real.
As far as I know, the Milgram experiments have replicated relatively well, and so will not be further discussed. The much ballyhooed Stanford Prison experiment, however, has turned out to be much more questionable.
The Stanford Prison Experiment became popular because it purportedly demonstrated that people’s behavior could be radically altered by even minor environmental expectations–in this case, being paid to pretend to be a prison guard for a few days turned people into raging psychopaths who tortured and abused their fellow students (“prisoners”) into mental breakdowns.
In reality, as has now come out, the “guards” were instructed to act violent and mean, and the prisoners were happily playing along, because after all, it was a fake prison:
Some of the experiment’s findings have been called into question, and the experiment has been criticized for unethical and unscientific practices. Critics have noted that Zimbardo instructed the “guards” to exert psychological control over the “prisoners”, and that some of the participants behaved in a way that would help the study, so that, as one “guard” later put it, “the researchers would have something to work with.” The experiment has also been criticized for its small and unrepresentative sample population. Variants of the experiment have been performed by other researchers, but none of these attempts have replicated the results of the SPE.
Psychology is littered with other experiments purporting to prove that the environment has a large effect on how people act and feel in daily life. Take “priming,” the idea that you can change people’s beliefs or behavior via very simple stimuli, eg, people will walk more slowly and shuffle their feet after reading words related to old people; or “power posing,” the idea that you will be more assertive and effective at work and negotiations after adopting a Superman or Wonder Woman type pose in front of the bathroom mirror for a few minutes.
Phrased optimistically, if “you” can be shaped by negative experiences, then “you” can be re-shaped by positive ones.
None of this is replicating.
It’s not that “priming” can’t exist (I’m actually certain that in some form it does, otherwise advertising wouldn’t work, and studies show that advertising probably works,) but that the extreme view assuming that people possess no true inner essence is flawed. A moderately shy person might be able, with the right ritual, to “pump themselves up” and do something they were too shy to do before, like give a presentation or ask for a raise, but a very shy person might find this completely ineffective.
Both people and their circumstances are complicated.
Sometimes people DO react to environmental stimuli, and sometimes people DO overcome tremendous odds. Sometimes people who were abused abuse others, and sometimes they don’t.
The other day I was walking through the garden when I looked down, saw one of these, leapt back, and screamed loudly enough to notify the entire neighborhood:
(The one in my yard was insect free, however.)
After catching my breath, I wondered, “Is that a wasp nest or a beehive?” and crept back for a closer look. Wasp nest. I mentally paged through my knowledge of wasp nests: wasps abandon nests when they fall on the ground. This one was probably empty and safe to step past. I later tossed it onto the compost pile.
The interesting part of this incident wasn’t the nest, but my reaction. I jumped away from the thing before I had even consciously figured out what the nest was. Only once I was safe did I consciously think about the nest.
Gazzaniga discusses a problem faced by brains trying to evolve to be bigger and smarter: how do you get more neurons working without taking up an absurd amount of space connecting each and every neuron to every other neuron?
Imagine a brain with 5 connected neurons: each neuron requires 4 connections to talk to every other neuron. A 5 neuron brain would thus need space for 10 total connections.
The addition of a 6th neuron would require 5 new connections; a 7th neuron requires 6 new connections, etc. A fully connected brain of 100 neurons would require 99 connections per neuron, for a total of 4,950 connections.
Connecting all of your neurons might work fine if you’re a sea squirt, with only 230 or so neurons, but it is going to fail hard if you’re trying to hook up 86 billion. The space required to hook up all of these neurons would be massively larger than the space you can actually maintain by eating.
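The quadratic blow-up described above is just the handshake formula, n(n−1)/2. A minimal sketch of the arithmetic (the 5, 100, 230, and 86 billion figures are the ones quoted above):

```python
def full_connections(n):
    """Connections needed to wire each of n neurons to every other one: n*(n-1)/2."""
    return n * (n - 1) // 2

print(full_connections(5))    # 10 connections for a 5-neuron brain
print(full_connections(100))  # 4,950 connections for 100 neurons
print(full_connections(230))  # a sea squirt's ~230 neurons: 26,335
print(f"{full_connections(86_000_000_000):.2e}")  # ~3.70e+21 for 86 billion
```

Each new neuron adds n−1 new wires, so the total grows with the square of the brain’s size–which is exactly why full connectivity stops being an option long before 86 billion.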
So how does an organism evolving to be smarter deal with the connectivity demands of increasing brain size?
Human social lives suggest an answer: on the human scale, one person can, Dunbar estimates, have functional social relationships with about 150 other people, including an understanding of those people’s relationships with each other. 150 people (the “Dunbar number”) is therefore the number of people who can reliably cooperate or form groups without requiring any top-down organization.
So how do humans survive in groups of a thousand, a million, or a billion (eg, China)? How do we build large-scale infrastructure projects requiring the work of thousands of people and used by millions, like interstate highways? By organization–that is, specialization.
In a small tribe of 150 people, almost everyone in the tribe can do most of the jobs necessary for the tribe’s survival, within the obvious limits of biology. Men and women are both primarily occupied with collecting food. Both prepare clothing and shelter; both can cook. There is some specialization of labor–obviously men can carry heavier loads; women can nurse children–but most people are generally competent at most jobs.
In a modern industrial economy, most people are completely incompetent at most jobs. I have a nice garden, but I don’t even know how to turn on a tractor, much less how to care for a cow. The average person does not know how to knit or sew, much less build a house, wire up the electricity and lay the plumbing. We attend school from 5 to 18 or 22 or 30 and end up less competent at surviving in our own societies than a cave man with no school was in his, not because school is terrible but because modern industrial society requires so much specialized knowledge to keep everything running that no one person can truly master even a tenth of it.
Specialization, not just of people but of organizations and institutions, like hospitals devoted to treating the sick, Walmarts devoted to selling goods, and Microsoft devoted to writing and selling computer software and hardware, lets society function without requiring that everyone learn to be a doctor, merchant, and computer expert.
Similarly, brains expand their competence via specialization, not denser neural connections.
The smartest people may boast more neurons than those of average intelligence, but their brains have fewer neural connections…
Neuroscientists in Germany recruited 259 participants, both men and women, to take IQ tests and have their brains imaged…
The research revealed a strong correlation between the number of dendrites in a person’s cerebral cortex and their intelligence. The smartest participants had fewer neural connections in their cerebral cortex.
Fewer neural connections overall allows different parts of the brain to specialize, increasing local competence.
All things are produced more plentifully and easily and of a better quality when one man does one thing that is natural to him and does it at the right time, and leaves other things. –Plato, The Republic
The brains of mice, as Gazzaniga discusses, do not need to be highly specialized, because mice are not very smart and do not do many specialized activities. Human brains, by contrast, are highly specialized, as anyone who has ever had a stroke has discovered. (Henry Harpending of West Hunter, for example, once had a stroke while visiting Germany that knocked out the area of his brain responsible for reading, but since he couldn’t read German in the first place, he didn’t realize anything was wrong until several hours later.)
I read, about a decade ago, that male and female brains have different levels, and patterns, of internal connectivity. (Here and here are articles on the subject.) These differences in connectivity may allow men and women to excel at different skills, and since we humans are a social species that can communicate by talking, this allows us to take cognitive modularity beyond the level of a single brain.
So modularity lets us learn (and do) more things, with the downside that sometimes knowledge is highly localized–that is, we have a lot of knowledge that we seem able to access only under specific circumstances, rather than use generally.
For example, I have long wondered at the phenomenon of people who can definitely do complicated math when asked to, but show no practical number sense in everyday life, like the folks from the Yale Philosophy department who are confused about why African Americans are under-represented in their major, even though Yale has an African American Studies department which attracts a disproportionate percentage of Yale’s African American students. The mathematical certainty that if any major in the whole school attracts more African American students, then other majors will end up with fewer, has been lost on these otherwise bright minds.
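The arithmetic the Yalies are missing is simple conservation: a student majoring in one department is not majoring in another. A hypothetical sketch with invented numbers (ten majors, 100 students–nothing here is actual Yale data):

```python
# Hypothetical: 100 African American students spread across 10 majors.
total_students = 100
num_majors = 10
even_share = total_students / num_majors  # 10 per major if spread evenly

# Suppose one major (African American Studies) attracts a disproportionate 40.
afam_studies_majors = 40
remaining_students = total_students - afam_studies_majors  # 60 left

# The other nine majors now average well below the even share.
average_elsewhere = remaining_students / (num_majors - 1)
print(average_elsewhere)  # ~6.67 on average, down from 10
```

However the 40 distribute themselves, the other majors collectively lose exactly as many students as African American Studies gains; no policy at the Philosophy department can change that total.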
Yalies are not the only folks who struggle to use the things they know. When asked to name a book–any book–ordinary people failed. Surely these people have heard of a book at some point in their lives–the Bible is pretty famous, as is Harry Potter. Even if you don’t like books, they were assigned in school, and your parents probably read The Cat in the Hat and Green Eggs and Ham to you when you were a kid. It is not that these people do not have the knowledge; it is that they cannot access it.
Teachers complain all the time that students–even very good ones–can memorize all of the information they need for a test, regurgitate it all perfectly, and then turn around and show no practical understanding of the information at all.
Richard Feynman wrote eloquently of his time teaching future science teachers in Brazil:
In regard to education in Brazil, I had a very interesting experience. I was teaching a group of students who would ultimately become teachers, since at that time there were not many opportunities in Brazil for a highly trained person in science. These students had already had many courses, and this was to be their most advanced course in electricity and magnetism – Maxwell’s equations, and so on. …
I discovered a very strange phenomenon: I could ask a question, which the students would answer immediately. But the next time I would ask the question – the same subject, and the same question, as far as I could tell – they couldn’t answer it at all! For instance, one time I was talking about polarized light, and I gave them all some strips of polaroid.
Polaroid passes only light whose electric vector is in a certain direction, so I explained how you could tell which way the light is polarized from whether the polaroid is dark or light.
We first took two strips of polaroid and rotated them until they let the most light through. From doing that we could tell that the two strips were now admitting light polarized in the same direction – what passed through one piece of polaroid could also pass through the other. But then I asked them how one could tell the absolute direction of polarization, for a single piece of polaroid.
They hadn’t any idea.
I knew this took a certain amount of ingenuity, so I gave them a hint: “Look at the light reflected from the bay outside.”
Nobody said anything.
Then I said, “Have you ever heard of Brewster’s Angle?”
“Yes, sir! Brewster’s Angle is the angle at which light reflected from a medium with an index of refraction is completely polarized.”
“And which way is the light polarized when it’s reflected?”
“The light is polarized perpendicular to the plane of reflection, sir.” Even now, I have to think about it; they knew it cold! They even knew the tangent of the angle equals the index!
I said, “Well?”
Still nothing. They had just told me that light reflected from a medium with an index, such as the bay outside, was polarized; they had even told me which way it was polarized.
I said, “Look at the bay outside, through the polaroid. Now turn the polaroid.”
“Ooh, it’s polarized!” they said.
After a lot of investigation, I finally figured out that the students had memorized everything, but they didn’t know what anything meant. When they heard “light that is reflected from a medium with an index,” they didn’t know that it meant a material such as water. They didn’t know that the “direction of the light” is the direction in which you see something when you’re looking at it, and so on. Everything was entirely memorized, yet nothing had been translated into meaningful words. So if I asked, “What is Brewster’s Angle?” I’m going into the computer with the right keywords. But if I say, “Look at the water,” nothing happens – they don’t have anything under “Look at the water”!
The students here are not dumb, and memorizing things is not bad–memorizing your times tables is very useful–but they have everything lodged in their “memorization module” and nothing in their “practical experience module.” (Note: I am not necessarily suggesting that there exists a literal, physical spot in the brain where memorized and experienced knowledge reside, but that certain brain structures and networks lodge information in ways that make it easier or harder to access.)
People frequently make arguments that don’t make logical sense when you think them all the way through from start to finish, but do make sense if we assume that people are using specific brain modules for quick reasoning and don’t necessarily cross-check their results with each other. For example, when we are angry because someone has done something bad to us, we tend to snap at people who had nothing to do with it. Our brains are in “fight and punish mode” and latch on to the nearest person as the person who most likely committed the offense, even if we consciously know they weren’t involved.
Political discussions are often marred by folks running what ought to be logical arguments through status signaling, emotional, or tribal modules. The desire to see Bad People punished (a reasonable desire if we all lived in the same physical community with each other) interferes with a discussion of whether said punishment is actually useful, effective, or just. For example, a man who has been incorrectly convicted of the rape of a child will have a difficult time getting anyone to listen sympathetically to his case.
In the case of white South African victims of racially-motivated murder, the notion that their ancestors did wrong and therefore they deserve to be punished often overrides sympathy. As BBC notes, these killings tend to be particularly brutal (they often involve torture) and targeted, but the South African government doesn’t care:
According to one leading political activist, Mandla Nyaqela, this is the after-effect of the huge degree of selfishness and brutality which was shown towards the black population under apartheid. …
Virtually every week the press here report the murders of white farmers, though you will not hear much about it in the media outside South Africa. In South Africa you are twice as likely to be murdered if you are a white farmer than if you are a police officer – and the police here have a particularly dangerous life. The killings of farmers are often particularly brutal. …
Ernst Roets’s organisation has published the names of more than 2,000 people who have died over the last two decades. The government has so far been unwilling to make solving and preventing these murders a priority. …
There used to be 60,000 white farmers in South Africa. In 20 years that number has halved.
The Christian Science Monitor reports on the measures ordinary South Africans have to take, in what was once a safe country, to avoid becoming human shishkabobs. You should pause and read it, but it is a bit of a tangent from our present discussion. The article ends with a mind-bending statement about a borrowed dog (dogs are also important for security):
My friends tell me the dog is fine around children, but is skittish around men, especially black men. The people at the dog pound told them it had probably been abused. As we walk past house after house, with barking dog after barking dog, I notice Lampo pays no attention. Instead, he’s watching the stream of housekeepers and gardeners heading home from work. They eye the dog nervously back.
Great, I think, I’m walking a racist dog.
Module one: Boy, South Africa has a lot of crime. Better get a dog, cover my house with steel bars, and install an extensive security system.
Module two: Associating black people with crime is racist, therefore my dog is racist for being wary of people who look like the person who abused it.
And while some people are obviously sympathetic to the plight of murdered people, “Cry me a river, White South African Colonizers” is a very common reaction. (Never mind that the people committing crimes in South Africa today never lived under apartheid; they’ve lived in a black-run country for their entire lives.) Logically, white South Africans did not do anything to deserve being killed, and, as with the goose that laid the golden eggs, killing the people who produce food will just trigger a repeat of Zimbabwe, but the modules of tribalism–“I do not care about these people because they are not mine and I want their stuff”–and punishment–“I read about a horrible thing someone did, so I want to punish everyone who looks like them”–trump logic.
Who dies–and how they die–significantly shapes our engagement with the news. Gun deaths via mass shootings get much more coverage and worry than ordinary homicides, even though ordinary homicides are far more common. Homicides get more coverage and worry than suicides, even though suicides are far more common. The majority of gun deaths are actually suicides, but you’d never know that from listening to our national conversation about guns, simply because we are biased to worry far more about other people killing us than about killing ourselves.
Similarly, the death of one person via volcano receives about the same news coverage as 650 in a flood, 2,000 in a drought, or 40,000 in a famine. As the article notes:
Instead of considering the objective damage caused by natural disasters, networks tend to look for disasters that are “rife with drama”, as one New York Times article put it—hurricanes, tornadoes, forest fires, earthquakes all make for splashy headlines and captivating visuals. Thanks to this selectivity, less “spectacular” but often times more deadly natural disasters tend to get passed over. Food shortages, for example, result in the most casualties and affect the most people per incident but their onset is more gradual than that of a volcanic explosion or sudden earthquake. … This bias for the spectacular is not only unfair and misleading, but also has the potential to misallocate attention and aid.
There are similar biases by continent, with disasters in Africa receiving less attention than disasters in Europe (this correlates with African disasters being more likely to be the slow-motion famines, epidemics and droughts that kill lots of people, and European disasters being splashier, though perhaps we’d consider famines “splashier” if they happened in Paris instead of Ethiopia.)
From a neuropolitical perspective, I suspect that patterns such as the Big Five personality traits correlating with particular political positions (“openness” with “liberalism,” for example, or “conscientiousness” with “conservativeness,”) are caused by patterns of brain activity that cause some people to depend more or less on particular brain modules for processing.
For example, conservatives process more of the world through the areas of their brain that are also used for processing disgust, (not one of “the five” but still an important psychological trait) which increases their fear of pathogens, disease vectors, and generally anything new or from the outside. Disgust can go so far as to process other people’s faces or body language as “disgusting” (eg, trans people) even when there is objectively nothing that presents an actual contamination or pathogenic risk involved.
Similarly, people who feel more guilt in one area of their life often feel guilt in others–eg, “White guilt was significantly associated with bulimia nervosa symptomatology.” The arrow of causation is unclear–guilt about eating might spill over into guilt about existing, or guilt about existing might cause guilt about eating, or people who generally feel guilty about everything could have both. Either way, these people are generally not logically reasoning, “Whites have done bad things, therefore I should starve myself.” (Should veganism be classified as a politically motivated eating disorder?)
I could continue forever–
Restrictions on medical research are biased toward preventing mentally salient incidents like thalidomide babies, but against the invisible cost of children who die from diseases that could have been cured had research not been prevented by regulations.
America has a large Somali community but not Congolese, (85,000 Somalis vs. 13,000 Congolese, of whom 10,000 hail from the DRC. Somalia has about 14 million people, the DRC has about 78.7 million people, so it’s not due to there being more Somalis in the world,) for no particular reason I’ve been able to discover, other than President Clinton once disastrously sent a few helicopters to intervene in the eternal Somali civil war and so the government decided that we now have a special obligation to take in Somalis.
–but that’s probably enough.
I have tried here to present a balanced account of different political biases, but I would like to end by noting that modular thinking, while it can lead to stupid decisions, exists for good reasons. If purely logical thinking were superior to modular, we’d probably be better at it. Still, cognitive biases exist and lead to a lot of stupid or sub-optimal results.
Forget the Piraha. It appears that most Americans are only vaguely aware of these things called “past” and “future”:
A majority of people now report that George W. Bush, who they once thought was a colossal failure of a president, whose approval ratings bottomed out at 33% when he left office, was actually good. By what measure? He broke the economy, destabilized the Middle East, spent trillions of dollars, and got thousands of Americans and Iraqis killed.
Apparently the logic here is “Sure, Bush might have murdered Iraqi children and tortured prisoners, but at least he didn’t call Haiti a shithole.” We Americans have standards, you know.
I’d be more forgiving if Bush’s good numbers all came from 18 year olds who were 10 when he left office and so weren’t actually paying attention at the time. I’d also be more forgiving if Bush had some really stupid scandals, like Bill Clinton–I can understand why someone might have given Clinton a bad rating in the midst of the Monica Lewinsky scandal, but looking back a decade later, might reflect that Monica didn’t matter that much and, as far as presidents go, Clinton was fine.
But if you thought invading Iraq was a bad idea back in 2008 then you ought to STILL think it is a bad idea right now.
Note: If you thought it was a good idea at the time, then it’s sensible to think it is still a good idea.
This post isn’t really about Bush. It’s about our human inability to perceive the flow of time and accurately remember the past and prepare for the future.
I recently texted a fellow mom: Would your kid like to come play with my kid? She texted back: My kid is down for a nap.
What about when the nap is over? I didn’t specify a time in the original text; tomorrow or next week would have been fine.
I don’t think these folks are trying to avoid me. They’re just really bad at scheduling.
People are especially bad at projecting current trends into the future. In a conversation with a liberal friend, he dismissed the idea that there could be any problems with demographic trends or immigration with, “That won’t happen for a hundred years. I’ll be dead then. I don’t care.”
An anthropologist working with the Bushmen noticed that they had to walk a long way each day between the watering hole, where the only water was, and the nut trees, where the food was. “Why don’t you just plant a nut tree near the watering hole?” asked the anthropologist.
“Why bother?” replied a Bushman. “By the time the tree was grown, I’d be dead.”
Of course, the tree would probably only take a decade to start producing, which is within even a Bushman’s lifetime, but even if it didn’t, plenty of people build up wealth, businesses, or otherwise make provisions to provide for their children–or grandchildren–after their deaths.
Likewise, current demographic trends in the West will have major effects within our lifetimes. Between the 1990 and 2010 censuses (twenty years), the number of Hispanics in the US doubled, from 22.4 million to 50.5 million. As a percent of the overall population, they went from 9% to 16%–making them America’s largest minority group, as blacks constitute only 12.6%.
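The census figures above imply a brisk compound growth rate, which is the point of the "within our lifetimes" claim. A minimal sketch using only the numbers quoted in the text:

```python
# Census figures quoted above: Hispanic population of the US
hispanics_1990 = 22.4e6
hispanics_2010 = 50.5e6
years = 20

growth = hispanics_2010 / hispanics_1990   # total growth factor over 20 years
annual_rate = growth ** (1 / years) - 1    # implied compound annual growth rate

print(f"{growth:.2f}x over {years} years, {annual_rate:.1%} per year")
```

A population compounding at roughly 4% a year doubles in under twenty years, which is exactly what the two censuses show; extrapolating even part of that trend forward lands well within a single lifetime.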
If you’re a Boomer, then Hispanics were only 2-3% of the country during your childhood.
The idea that demographic changes will take a hundred years and therefore don’t matter makes as much sense as saying a tree that takes ten years to grow won’t produce within your lifetime and therefore isn’t worth planting.
Society can implement long term plans–dams are built with hundred-year storms and floods in mind; building codes are written with hundred-year earthquake risks in mind–but most people seem to exist in a strange time warp in which neither the past nor future really exist. What they do know about the past is oddly compressed–anything from a decade to a century ago is mushed into a vague sense of “before now.” Take this article from the Atlantic on how Michael Brown (born in 1996,) was shot in 2014 because of the FHA’s redlining policies back in 1943.
I feel like I’m beating a dead horse at this point, but one of the world’s most successful ethnic groups was getting herded into gas chambers in 1943. Somehow the Jews managed to go from being worked to death in the mines below Buchenwald (slave labor dug the tunnels where von Braun’s rockets were developed) to not getting shot by the police on the streets of Ferguson in 2014, 71 years later. It’s a mystery.
I’m focusing here on political matters because they make the news, but I suspect this is a true psychological trait for most people–the past blurs fuzzily together, and the future is only vaguely knowable.
Politically, there is a tendency to simultaneously assume the past–which continued until last Tuesday–was a long, dark, morass of bigotry and unpleasantness, and that the current state of enlightened beauty will of course continue into the indefinite future without any unpleasant expenditures of effort.
In reality, our species is, more or less, 300,000 years old. Agriculture is only 10,000 years old.
100 years ago, the last great bubonic plague epidemic (Yersinia pestis) was still going on. 10 million people died, including 119 Californians. 75 years ago, millions of people were dying in WWII. Sixty years ago, polio was still crippling children (my father caught it, suffering permanent nerve damage.)
100 years ago, only one city in the US–Jersey City–routinely disinfected its drinking water. (Before disinfection and sewers, drinking water was routinely contaminated with deadly bacteria like cholera.) I’m still looking for data on the spread of running water, but chances are good your grandparents did not have an indoor toilet when they were children. (I have folks in my extended family who still have trouble when the water table drops and their well dries up.)
Hunger, famines, disease, death… I could continue enumerating, but my point is simple: the prosperity we enjoy is not only unprecedented in the course of human history, but it hasn’t even existed for one full human lifetime.
Rome was once an empire. In the year one hundred, the eternal city had over 1,500,000 citizens. By 500, it had fewer than 50,000. It would not recover for over a thousand years.
Everything we have can be wiped away in another human lifetime if we refuse to admit that the future exists.
Most of the activities our ancestors spent the majority of their time on have been automated or largely replaced by technology. Chances are good that the majority of your great-great grandparents were farmers, but few of us today hunt, gather, plant, harvest, or otherwise spend our days physically producing food; few of us will ever build our own houses or even sew our own clothes.
Evolution has (probably) equipped us with neurofeedback loops that reward us for doing the sorts of things we need to do to survive, like hunt down prey or build shelters (even chimps build nests to sleep in,) but these are precisely the activities that we have largely automated and replaced. The closest analogues to these activities are now shopping, cooking, exercising, working on cars, and arts and crafts. (Even warfare has been largely replaced with professional sports fandom.)
Society has invented vicarious thrills: Books, movies, video games, even roller coasters. Our ability to administer vicarious emotions appears to be getting better and better.
And yet, it’s all kind of fake.
Exercising, for example, is in many ways a pointless activity–people literally buy machines so they can run in place. But if you have a job that requires you to be sedentary for most of the day and don’t fancy jogging around your neighborhood after dark, running in place inside your own home may be the best option you have for getting the post-running-down-prey endorphin hit that evolution designed you to crave.
A sedentary lifestyle with supermarkets and restaurants deprives us of that successful-hunting endorphin hit and offers us no logical reason to go out and get it. But without that exercise, not only our physical health, but our mental health appears to suffer. According to the Mayo Clinic, exercise effectively decreases depression and anxiety–in other words, depression and anxiety may be caused in part by lack of exercise.
So what do we do? We have to make up some excuse and substitute faux exercise for the active farming/gardening/hunting/gathering lifestyles our ancestors lived.
Overall, the number of Americans on medications used to treat psychological and behavioral disorders has substantially increased since 2001; more than one-in-five adults was on at least one of these medications in 2010, up 22 percent from ten years earlier. Women are far more likely to take a drug to treat a mental health condition than men, with more than a quarter of the adult female population on these drugs in 2010 as compared to 15 percent of men.
Women ages 45 and older showed the highest use of these drugs overall. …
The trends among children are opposite those of adults: boys are the higher utilizers of these medications overall but girls’ use has been increasing at a faster rate.
This is mind-boggling. 1 in 5 of us is mentally ill, (supposedly,) and the percent for young women in the “prime of their life” years is even higher. (The rates for Native Americans are astronomical.)
Lack of exercise isn’t the only problem, but I wager a decent chunk of it is that our lives have changed so radically over the past 100 years that we are critically lacking various activities that used to make us happy and provide meaning.
Take the rise of atheism. Irrespective of whether God exists or not, many functions–community events, socializing, charity, morality lessons, etc–have historically been done by religious groups. Atheists are working on replacements, but developing a full system that works without the compulsion of religious belief may take a long while.
Sports and video games replace war and personal competition. TV sitcoms replace friendship. Twitter replaces real life conversation. Politics replace friendship, conversation, and religion.
There’s something silly about most of these activities, and yet they seem to make us happy. I don’t think there’s anything wrong with enjoying knitting, even if you’re making toy octopuses instead of sweaters. Nor does there seem to be anything wrong with enjoying a movie or a game. The problem comes when people get addicted to these activities, which may be increasingly likely as our ability to make fake activities–like hyper-realistic special effects in movies–increases.
Given modernity, should we indulge? Or can we develop something better?