Does the DSM need to be re-written?

I recently came across an interesting paper that looked at the likelihood that a person, once diagnosed with one mental disorder, would be diagnosed with another. (Exploring Comorbidity Within Mental Disorders Among a Danish National Population, by Oleguer Plana-Ripoll et al.)

This was a remarkable study in two ways. First, it had a sample size of 5,940,778, followed up for 83.9 million person-years–basically, the entire population of Denmark over 15 years. (Big Data indeed.)

Second, it found that for virtually every disorder, one diagnosis increased your chances of being diagnosed with a second disorder. (“Comorbid” is a fancy word for “two diseases or conditions occurring together,” not “dying at the same time.”) Some disorders were particularly likely to co-occur–people diagnosed with “mood disorders,” for instance, had a 30% chance of also being diagnosed with “neurotic disorders” during the 15 years covered by the study.

Mood disorders include bipolar disorder, depression, and SAD;

Neurotic disorders include anxieties, phobias, and OCD.

Those chances were considerably higher for people diagnosed at younger ages, and decreased significantly for the elderly–those diagnosed with mood disorders before the age of 20 had a more than 40% chance of also being diagnosed with a neurotic disorder, while those diagnosed after 80 had only a 5% chance.

I don’t find this terribly surprising–I know someone with at least five different psychological diagnoses (nor is it surprising that many people with “intellectual disabilities” also have “developmental disorders”)–but it’s interesting just how pervasive comorbidity is across conditions that are ostensibly separate diseases.

This suggests to me that either many people are being mis-diagnosed (perhaps diagnosis itself is very difficult,) or what look like separate disorders are often actually one, single disorder. While it is certainly possible, of course, for someone to have both a phobia of snakes and seasonal affective disorder, the person I know with five diagnoses most likely has only one “true” disorder that has just been diagnosed and treated differently by different clinicians. It seems likely that some people’s depression also manifests itself as deep-rooted anxiety or phobias, for example.

While this is a bit of a blow for many psychiatric diagnoses, (and I am quite certain that many diagnostic categories will need a fair amount of revision before all is said and done,) autism recently got a validity boost–How brain scans can diagnose Autism with 97% accuracy.

The title is overselling it, but it’s interesting anyway:

Lead study author Marcel Just, PhD, professor of psychology and director of the Center for Cognitive Brain Imaging at Carnegie Mellon University, and his team performed fMRI scans on 17 young adults with high-functioning autism and 17 people without autism while they thought about a range of different social interactions, like “hug,” “humiliate,” “kick” and “adore.” The researchers used machine-learning techniques to measure the activation in 135 tiny pieces of the brain, each the size of a peppercorn, and analyzed how the activation levels formed a pattern. …

So great was the difference between the two groups that the researchers could identify whether a brain was autistic or neurotypical in 33 out of 34 of the participants—that’s 97% accuracy—just by looking at a certain fMRI activation pattern. “There was an area associated with the representation of self that did not activate in people with autism,” Just says. “When they thought about hugging or adoring or persuading or hating, they thought about it like somebody watching a play or reading a dictionary definition. They didn’t think of it as it applied to them.” This suggests that in autism, the representation of the self is altered, which researchers have known for many years, Just says.

N=34 is not quite as impressive as N=Denmark, but it’s a good start.


Learning in Numbers

There is strength in numbers, but is there wisdom?

I’ve heard from multiple sources the claim that parenting, paradoxically, gets easier after the fourth child. There are several simple explanations for this phenomenon: people get more skilled at parenting after lots of practice; the older kids start helping out with the younger ones, etc.

But what if the phenomenon rests on something much more basic about human psychology–our desire to imitate others?

(Perhaps you don’t, dear reader. There are always exceptions.)

As Aristotle put it, man is a political animal–by which he meant that we are inherently social and prone to building communities (polities) together, not that we are inherently prone to arguing about who should govern North Carolina, though that may be political, too. In Aristotle’s words, a man who lives entirely alone is either a beast (living like an animal) or a god (able to fulfill all of his own needs without recourse to other humans.) Normal humans depend in many ways on other humans.

Compared to our pathetic ability to learn math (just look at most people’s SAT math scores) and our inability to read without direct instruction, humans learn socially-imparted skills with ease: speaking multiple languages, playing games, asserting dominance over each other, knowing which clothes are fashionable, and cracking a socially-appropriate joke.

Social learning comes so naturally to people that we only notice it in cases of extreme deficit–like autism–or when parents protest that their children are becoming horribly corrupted by their peers.

So perhaps households with more than 4 children have hit a threshold beyond which social learning takes over and the younger children simply seem to “absorb” knowledge from their older siblings instead of having to be explicitly taught.

Consider learning to eat, a hopefully simple task. We are born with instincts to nurse, put random things in our mouths, and swallow. Preventing babies from eating random non-food objects is a bit of a problem for new parents. But learning things like “how to get this squishy food into your mouth with a spoon without also getting it everywhere else in the room” is much more complicated–and humans take food rituals to much more complicated heights than strained peas and carrots.

New parents put a great deal of effort into teaching their first child to eat (something that ought to be an instinct.) Those with means puree fresh veggies, chop bits of meat, show a sudden interest in organics, and sit down to spoon every single last bit into their infants’ mouths. It is as if they are convinced that kids cannot learn to eat without at least as much instruction as a student learning to wield a welding torch. (And based on my own experience, they’re probably right.)

By contrast, parents of multiple children have–by necessity–relaxed. As a popular comic once depicted (though I can’t find it now,) feeding at this point becomes throwing Cheerios at the highchair as you run by.

Yet I’ve never seen any evidence that the younger children in large families are likely to be malnourished–they seem to catch the Cheerios on the fly and do just fine.

What if imitation is a strong factor in larger families, allowing infants and young children to learn skills like “how to eat” just by watching their older siblings, without needing direct parental instruction? You might object that even infants in single-child households could learn to eat by imitating their parents (and they probably do,) but having more people around probably reinforces the behavior more strongly, and having other young children around provides an example that is much more similar to the infant. We adults are massive compared to children, after all.

If basic learning of life skills proceeds more easily in an environment with more peers (for infants or adults), then what effects should we expect from our current trend toward extreme atomization?

I recently came across an essay about life in a trailer park vs sturdier housing:

To me, growing up in that trailer park meant playing until dark with neighborhood kids, building tree houses and snow forts. Listening out my bedroom window for the sound of my dad’s pickup truck leaving for work in the early morning. Riding my bike down the big hill at the top of the lot, avoiding potholes and feeling safe because there wasn’t much traffic and if I fell and skinned my knee, someone would come out on their front porch and ask if I was okay.

Some of the only happy memories I have of my childhood were from that time in my life, before my parents were thrust into insurmountable debt, before my mother was hospitalized, before I had to go live with my grandmother. Nana had a real house. She didn’t live in a trailer. But when she would scream at me or try to attack me as I squeezed by her and fled upstairs, I wished I had neighbors close by to hear her — to believe me, and to perhaps even help.

The most dysfunctional and unstable years of my life were spent in a real house, with four walls and a slanted roof — where fences went up between the houses so that no one ever had to feel responsible for what went on behind their neighbor’s front door.

This is more about atomization than learning, but still interesting. Is it good for humans to be so far apart? To live far from relatives, in houses with thick walls, as single children or single adults, working and commuting every day among strangers?

Certainly the downsides of being among relatives are well-documented. Many tribal societies have downright cruel customs directed at relatives, like sati or adult circumcision. But that doesn’t mean that the extreme opposite–total atomization–is perfect. Atomization carries other risks. Among them, staying indoors and not socializing with our neighbors may cause us to lose some of our social knowledge, our ability to learn how to exist together.

We might expect that physical atomization due to technological change (sturdier houses, more entertaining TV, comfier climate control systems,) could cause symptoms in people similar to those caused by medical deficits in social learning, like autism. A recent study on the subject found an interesting variation between the brains of normies and autists:

So great was the difference between the two groups that the researchers could identify whether a brain was autistic or neurotypical in 33 out of 34 of the participants—that’s 97% accuracy—just by looking at a certain fMRI activation pattern. “There was an area associated with the representation of self that did not activate in people with autism,” Just says. “When they thought about hugging or adoring or persuading or hating, they thought about it like somebody watching a play or reading a dictionary definition. They didn’t think of it as it applied to them.” This suggests that in autism, the representation of the self is altered, which researchers have known for many years, Just says.

This might explain the high rates of body dysmorphia in autism. It might also explain rising rates of dysmorphia in society at large.

I remember another study which I read ages ago which found that people basically thought about “God” in the same parts of their brain where they thought about themselves. This explains why God tends to have the same morals as His believers. If autists have trouble imagining themselves, then they may also have trouble imagining God–and this might explain rising atheism rates.

Even our rising autism rates, though probably driven primarily by shifts in diagnostic fads, might be influenced by shrinking families and greater atomization, as kids with borderline conditions might show more severe symptoms if they are also more isolated.

On the other hand, social media is allowing people to come together and behave socially in new and ever larger groups.

For all their weaknesses, autists are probably better than normies at certain kinds of tasks, like abstract reasoning where you don’t want to think too much about yourself. I have long suspected that normies balk at philosophical dilemmas such as the trolley problem because they over-empathize with the subjects. Imagining themselves as one of the victims of the runaway trolley causes them distress, and distress causes them to attack the person causing them distress–the philosopher.

And so the citizens of Athens condemned Socrates to death.

But just as people can overcome their natural and very sensible fear of heights in order to work on skyscrapers, perhaps they can train themselves not to empathize with the subjects of trolley problems. Spending time on problems with no human subjects (such as mathematics or engineering) may also help people practice ways of approaching problems that don’t immediately resort to imagining themselves as the subject. Conversely, perhaps a bit of atomization (as seen historically in countries like Britain and France, and recently AFAIK in Japan,) helps equip people to think about difficult, non-human-related mathematical or engineering problems.

Thoughts?

Cyborg Dreams: Alita Review with Spoilers

This is a review for Alita: Battle Angel, now out in theaters. If you want the review without spoilers, scroll down quickly to the previous post.

It is difficult for any movie to be truly deep. Is Memento deep, or does it just use a backwards-narrative gimmick? Often meaning is something we bring to movies–we interpret them based on our own experiences.

What is the point of cyborgs? They are the ultimate fusion of man and machine. Our technology doesn’t just serve us; it has become us.  What are we, then? Are cyborgs human, or more than human? And what of the un-enhanced meatsacks left behind?

Throughout the movie, we see humans with various levels of robotic enhancement, from otherwise normal people with an artificial limb to monstrous brawlers that are almost unrecognizable as human. Alita is a complete cyborg whose only remaining “human” part is her biological brain (perhaps her skull, too.) The rest of her, from heart to toes, is machine, and can be disassembled and replaced as necessary.

The graphic novels go further than Alita–in one case, a whole community breaks down after it discovers that the adults have had their brains replaced with computer chips. Can a “human” have a metal body but a meat brain? Can a “human” have a meat body but a computer brain? Alita says yes, that humanity is more than just the raw material we are built of.

(The question also runs the other way–is the jet-powered hammer Ido wields in battle any different from a jet-powered hammer built into your arm? Does it matter whether you can put the technology down and pack it into a suitcase at the end of the day, or whether it is built into your core?)

Yet cyborgs in Alita’s world, despite their obvious advantage over mere humans in terms of speed, reflexes, strength, and ability to switch your arms out for power saws, are mostly true to their origin as disabled people whose bodies were replaced with artificial limbs. Alita’s first body, given to her at the beginning of the movie after she is found without one, was originally built for a little girl in a wheelchair. She reflects to a friend that she is now fast because the little girl’s father built her a fast pair of legs so she could finally run.

The upper class–to the extent that we see them–has no obvious enhancements. Indeed, the most upper class family we meet in the movie, which originally lived in the floating city of Tiphares (Zalem in the movie) was expelled from the city and sent down to the scrap yard with the rest of the trash because of their disabled daughter–the one whose robotic body Alita inherits.

Hugo is an ordinary meat boy with what we may interpret as a serious prejudice against cyborgs–though he comes across as a nice lad, he moonlights as a thief who kidnaps cyborgs and chops off their body parts for sale on the black market. Hugo justifies himself by claiming he “never killed anyone,” which is probably true, but the process certainly hurts the cyborgs (who cry out in pain as their limbs are sawed off,) and leaves them lying disabled in the street.

Hugo isn’t doing it because he hates cyborgs, though. They’re just his ticket to money–the money he needs to get to Tiphares/Zalem. For even though it is said that no one in the Scrap Yard (Iron City in the movie) is ever allowed into Tiphares, people still dream of Heaven. Hugo believes a notorious fixer named Vector can get him into Tiphares if he just pays him enough money.

Some reviewers have identified Vector as the Devil himself, based on his line, “Better to reign in Hell than serve in Heaven,” which the Devil speaks in Milton’s Paradise Lost–though Milton is himself reprising Achilles in the Odyssey, who claims, “By god, I’d rather slave on earth for another man / some dirt-poor tenant farmer who scrapes to keep alive / than rule down here over all the breathless dead.” 

Yet the Scrap Yard is not Hell. Hell is another layer down; it is the sewers below the Scrap Yard, where Alita’s first real battle occurs. The Scrap Yard is Purgatory; the Scrap Yard is Earth, suspended between both Heaven and Hell, from which people can choose to rise (to Tiphares) or descend (to the sewers.) But whether Tiphares is really Heaven or just a dream they’ve been sold remains to be seen–for everyone in the Scrap Yard is fallen and none may enter Heaven.

Alita, you probably noticed, descended into Hell to fight an evil monster–in the manga, because he kidnapped a baby; in the movie because he was trying to kill her. In the ensuing battle, she is crushed and torn to pieces, sacrificing her final limb to drill out the monster’s eye. Her unconscious corpse is rescued by her friends, dragged back to the surface, and then rebuilt with a new body.

“I do not stand by in the presence of evil”–Alita

“But let me reveal to you a wonderful secret. We will not all die, but we will all be transformed! It will happen in a moment, in the blink of an eye, when the last trumpet is blown. For when the trumpet sounds, those who have died will be raised to live forever. And we who are living will also be transformed. For our dying bodies must be transformed into bodies that will never die; our mortal bodies must be transformed into immortal bodies.” –1 Corinthians 15:51-53

Alita has died and been resurrected. Whether she will ascend into Heaven remains a matter for the sequel. (She does. Obviously.)

Through his relationship with Alita (they smooch), Hugo realizes that cyborgs are people, too, and maybe he shouldn’t chop them up for money.  “You are more human than anyone I know,” he tells her.

Alita, in a scene straight from The Last Temptation of Christ, offers Hugo her heart–literally–to sell to raise the remaining money he needs to make it to Tiphares.

Hugo, thankfully, declines the offer, attempting to make it to Tiphares on his own two feet (newly resurrected after Alita saves his life by literally hooking him up to her own life support system)–but no mere mortal can ascend to Tiphares; even giants may not assault the gates of Heaven.

The people of the Scrap Yard are fallen–literally–from Tiphares, their belongings and buildings either relics from the time before the fall or from trash dumped from above. There is hope in the Scrap Yard, yet the Scrap Yard generates very little of its own, explaining its entirely degraded state.

This is a point where the movie fails–the set builders made the set too nice. The Scrap Yard is a decaying, post-apocalyptic waste filled with strung-out junkies and hyper-violent-TV addicts. In one scene in the manga, Doc Ido, injured, collapses in the middle of a crowd while trying to drag the remains of Alita’s crushed body back home so he can fix her. Bleeding, he cries out for help–but the crowd, entranced by the story playing out on the screens around them, ignores them both.

In the movie, the Scrap Yard has things like oranges and chocolate–suggesting long-distance trade and large-scale production, things the Scrap Yard really shouldn’t be able to support. In the manga, the lack of police makes sense, as this is a society with no ability to cooperate for the common good. Since the powers that be would like to at least prevent their own deaths at the hands of murderers, the Scrap Yard instead puts bounties on the heads of criminals, and licensed “Hunter Warriors” decapitate them for money.

(A hunter license is not difficult to obtain. They hand them out to teenage girls.)

Here the movie enters its discussion of Free Will.

Alita awakes with no memory of her life before she became a decapitated head sitting in a landfill. She has the body of a young teen and, thankfully, adults willing to look out for her as she learns about life in Iron City from the ground up–first, that oranges have to be peeled; second, that cars can run you over.

The movie adds the backstory about Doc Ido’s deceased, disabled daughter for whom he built the original body that he gives to Alita. This is a good move, as it makes explicit a relationship that takes much longer to develop in the manga (movies just don’t have the same time to develop plots as a manga series spanning decades.) Since Alita has no memory, she doesn’t remember her own name (Yoko). Doc therefore names her “Alita,” after the daughter whose body she now wears.

As an adopted child myself, I feel a certain kinship with narratives about adoption. Doc wants his daughter back. Alita wants to discover her true identity. Like any child, she is growing up, discovering love, and wants different things for her life than her father does.

Despite her amnesia, Alita has certain instincts. When faced with danger, she responds–without knowing how or why–with a sudden explosion of violence, decapitating a cyborg that has been murdering young women in her neighborhood. Alita can fight; she is extremely skilled in an advanced martial art developed for cyborgs. In short, she is a Martian battle droid that has temporarily mistaken itself for a teenage girl.

She begs Ido to hook her up to a stronger body (the one intended for his daughter was not built with combat in mind,) but he refuses, declaring that she has a chance to start over, to become something totally new. She has free will. She can become anything–so why become a battle robot all over again?

But Alita cannot just remain Doc’s little girl. Like all children, she grows–and like most adopted children, she wants to know who she is and where she comes from. She is good at fighting. This is her only connection to her past, and as she asserts, she has a right to that. Doc Ido has no right to dictate her future.

What is Alita? As far as she knows, she is trash, broken refuse literally thrown out through the Tipharean rubbish chute. The worry that you were adopted because you were unwanted by your biological parents–thrown away–plagues many adopted children. But as Alita discovers, this isn’t true. She’s not trash–she’s an alien warrior who once attacked Earth and ended up unconscious in the scrap yard after losing most of her body in the battle. Like the Nephilim, she is a heavenly battle angel who literally fell to Earth.

By day, Ido is a doctor, healing people and fixing cyborgs. By night, he is a Hunter Warrior, killing people. For Ido, killing is an expression of rage after his daughter’s death, a way of channeling a psychotic impulse into something that benefits society by aiming it at people even worse than himself. But for Alita, violence serves a greater purpose–she uses her talent to eliminate evil and serve justice. Alita’s will is to protect the people she loves.

After Alita runs away, gets in a fight, descends into Hell, and is nearly completely destroyed, Doc relents and attaches her to a more powerful, warrior body. He recognizes that time doesn’t freeze and he cannot keep Alita forever as his daughter (a theme revisited later in the manga when Nova tries to trap Alita in an alternative-universe simulation where she never becomes a Hunter Warrior.)

In an impassioned speech, Nova declares, “I spit upon the second law of thermodynamics!” He wants to freeze time; prevent decay. But even Nova, as we have seen, cannot contain Alita’s will. She knows it is a simulation. She plays along for a bit, enjoying the story, then breaks out.

Alita’s new body uses “nanotechnology,” which is to say, magic, to keep her going. Indeed, the technology in the movie is no more explained than magic in Harry Potter, other than some technobabble about how Alita’s heart contains a miniature nuclear reactor that could power the whole city, which is how she was able to stay alive for 300 years in a trash heap.

With her more powerful body, Alita is finally able to realize herself.

Alita’s maturation from infant (a living head completely unable to move,) to young adult is less explicit in the movie than in the manga, but it is still there–with the reconfiguration of her new body based on Alita’s internal self-image, Doc discovers that “She is a bit older than you thought she was.” In a dream sequence in the original, the metaphors are made explicit–limbless Alita in one scene becomes an infant strapped to Doc’s back as he roots through the dump for parts. Then she receives a pair of arms, and finally legs, turning into a toddler and a girl. Finally, with her berserker body, she achieves adulthood.

But with all of this religious imagery, is Tiphares really heaven? Of course not–if it were, why would Nova–who is the true villain trying to kill her–live there? There was a war in the Heavens–but the Heavens are far beyond Tiphares. Alita will escape Purgatory and ascend to Tiphares–and unlike the others, she will not do it by being chopped into body parts for Nova’s experiments.

For the mind is its own place, and can make a Heaven of Hell, and a Hell of Heaven.

Tiphares is only the beginning, just as the Scrap Yard is not the Hell we take it for.

Review: Battle Angel Alita 5/5 stars

I have not seen a movie aimed at adults, in an actual theater, in over a decade. Alita: Battle Angel broke my movie fast because I was a huge fan of the manga.

It was marvelous.

I can’t judge the movie from the perspective of someone who has seen every last Marvel installment, nor one who hasn’t read the manga. But it is visually stunning, with epic battle scenes and a philosophical core.

What does it mean to be human? Can robots be human? What about humanoid battle cyborgs? Alita is simultaneously human–a teenage girl searching for her place in this world–and inhuman–a devastating battle droid.

I don’t want to give away too many spoilers, so I’ll showcase the trailer:

Yes, she has giant anime eyes. You get used to it quickly.

I saw it in 3D, which was amazing–the technology we have for making and distributing movies in general is amazing, but this is a film whose action sequences really stand out in the medium.

The story is basically true to its manga inspiration, though there are obvious changes. The original story is much too long for a single movie, for example, and the characters often paused in the middle of battle for philosophical conversations. The movie lets the philosophy hang more in the background, (even skipping the Nietzsche.)

The movie’s biggest weakness was the main set, which was just too pleasant looking to be as gritty as the characters regarded it. There are a few other world-building inconsistencies, but nothing on the scale of “Why didn’t the giant eagles just fly the ring to Mordor?” or “how does money work in Harry Potter?”

The movie has no shoe-horned-in political agenda–Alita never stops to whine about how women are treated in Iron City, for example; she just explodes with family-protecting violence. The plot is structured around class inequality, but this is a fairly believable backdrop, since class is a real thing we all deal with in the real world. The movie does feature the “tiny girl who can beat up big bad guys” trope, but then, she is a battle droid made of metal, so Alita’s fighting skills make more sense than, say, River Tam’s.

Unfortunately, there are a few loose ends that are clearly supposed to carry into a sequel, which may not happen if all of the nay-sayers get their way. This makes the movie feel a touch unfinished–the story isn’t over.

So what’s with all of the bad reviews?

Over on Rotten Tomatoes, the critics gave the movie a 60% rating, while the movie-going public has given it a 93% rating. That’s quite the split. Perhaps there are some movies that critics just don’t get, but certain fans love. But I note that other superhero movies, like Iron Man and Guardians of the Galaxy, received quite good reviews, despite the fact that pretty much all superhero movies are absurd if you think about them for too long. (GotG stars a raccoon, for goodness’ sake.)

Overall, if you like superhero/action movies, you will probably like Alita.

So why did I like the manga so much?

In part, it was just timing–I had a Japanese friend and we liked to hang out and watch anime together. In part it was artistic–Alita is a lovely character, and as a young female, I was smitten both with her cyborg good looks and the fact that she looks more like me than most superheroes. I spent much of my youth drawing cyborg girls of my own. Beyond that, it’s hard to say–sometimes you just like something.

What about you? Seen anything good, lately?

Phase Change and Revolutions

Phase changes don’t usually happen instantly, the way supercooled water flash-freezes in videos, but they are sudden from the perspective of temperature. You don’t see a few ice crystals forming at 40 degrees Fahrenheit, a few large chunks of ice at 38, the water halfway frozen at 34, and the whole thing solid at 32. No, at 34 degrees, water is liquid. Water is liquid all the way from 212 down to 33 degrees, and then suddenly, without warning, it transforms at 32 (even if the freezing itself takes a little time.) By 31.9, it’s a solid chunk.

One of the enduring mysteries of political science is “Why did no one in political science predict the fall of the Soviet Union?” One of the other enduring mysteries of political science is “Why on earth did the Soviet Union fall when it did? Why not earlier–or later?”

Political regimes don’t fall very often. We can look around the world today and see a number of repressive states–North Korea, Venezuela, Iran–that don’t look like they’re doing a very good job of taking care of their citizens, yet their governments stay firmly in power. Why don’t these regimes fall? Or will they–someday?

I propose that regime change is much like phase changes–difficult to predict because they simply cannot happen before a specific point, and they happen so rarely that we don’t have enough data to test exactly which conditions are necessary to make them occur, much less figure out whether those conditions currently exist within a foreign society.

There are probably two main things necessary for something like the fall of the Soviet Union:

First, a majority of the people with guns–the armed forces in most countries, but a lot of civilians in the US–need to stop believing in the regime.

Second, the majority that no longer believes in the regime’s legitimacy has to know that it is a majority.

Since opposing the regime will usually get you shot, no one wants to be the first guy to say that he doesn’t believe in the regime. Since opposing the regime will get you shot, even people who oppose the regime will go ahead and shoot comrades who have opposed the regime in fear that if they don’t, they will also be shot.

100% of people in a system can oppose the regime and the regime will still keep charging on, shooting dissenters, if no one knows that everyone else is also opposed to the regime.
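This dynamic can be sketched as a toy threshold-cascade model (in the spirit of Granovetter’s threshold models of collective behavior; the numbers and setup here are invented purely for illustration): every agent privately opposes the regime, but each will only dissent openly once enough others are already visibly dissenting.

```python
# Toy threshold-cascade sketch (illustrative numbers only).
# Every agent privately opposes the regime, but agent i will only
# dissent OPENLY once at least thresholds[i] others are already
# visibly dissenting.

def run_cascade(thresholds):
    """Iterate to a fixed point; return the final count of open dissenters."""
    dissenting = 0
    while True:
        now_dissenting = sum(1 for t in thresholds if dissenting >= t)
        if now_dissenting == dissenting:
            return dissenting
        dissenting = now_dissenting

N = 1000

# Universal opposition, but everyone waits for someone else to go first:
quiet = list(range(1, N + 1))  # agent i needs i visible dissenters
print(run_cascade(quiet))      # -> 0: the regime marches on

# Add one Don Quixote with threshold 0, and the whole chain tips over:
spark = [0] + quiet
print(run_cascade(spark))      # -> 1001: full cascade
```

Note the knife-edge: remove the single zero-threshold agent, or raise every threshold by one, and nothing ever happens, which is why the same regime can look perfectly stable right up until the moment it collapses.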

So how does regime change actually happen?

First, you need crazy people willing to charge, like Don Quixote, at windmills and regimes. These people will usually get shot, which is why they need to be crazy. But if enough people have already decided that the regime is not particularly legitimate, there is a possibility that one of them will decide to be lenient. They will quietly decide not to shoot the revolutionary.

The fall of the Berlin Wall happened almost by accident–new East German regulations on round-trip travel were announced at a televised press conference in a way that made it sound like anyone who wanted was now allowed through the checkpoints into West Berlin, effective immediately. Thousands of people showed up within hours, demanding to be let through (after all, it had been officially announced, as far as they knew). The overwhelmed border guards didn’t want to shoot that many people, so after a bit of conferring, they gave in and let everyone through.

There were plenty of cracks already in the regime’s hold on power, but like a tap to the side of a bottle of supercooled water, this one little mistake caused a knowledge cascade. The thousands of people who showed up at the checkpoints (and didn’t get shot) now knew that there were thousands of other people who agreed with them–and soon that knowledge spread to everyone else in East Germany and the rest of the Eastern Bloc.

The difficulty with predicting when a regime will fall is the difficulty of predicting a random tap to the bottle or a little dust for the first crystals to form around–and that’s assuming you have a state that has already lost legitimacy in the eyes of most of its citizens. If it hasn’t, that same tap does nothing–and unfortunately, states are much more complicated than bottles of water, and so involve a lot more variables than just temperature.
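The common-knowledge dynamic described above can be made concrete with a toy threshold model, in the spirit of Granovetter’s threshold model of collective behavior (the thresholds below are invented for illustration, not a claim about any real regime): each person joins a protest only once enough others visibly have, and removing a single low-threshold individual can kill an otherwise total cascade.

```python
# Granovetter-style threshold cascade: each agent joins the protest once
# the number of others already protesting reaches their personal threshold.
def cascade_size(thresholds):
    protesting = 0
    while True:
        joined = sum(1 for t in thresholds if t <= protesting)
        if joined == protesting:      # fixed point: no one else will join
            return protesting
        protesting = joined

# Thresholds 0..99: the lone "Don Quixote" (threshold 0) charges,
# the threshold-1 agent follows, then threshold 2, and so on.
full = cascade_size(list(range(100)))

# Same population, except the threshold-1 agent now needs 2 visible
# protesters: the crazy person charges alone and the cascade dies.
stalled = cascade_size([0, 2] + list(range(2, 100)))

print(full, stalled)  # 100 1
```

The punchline is how discontinuous the outcome is: two nearly identical populations, one tiny difference in who is willing to move second, and the regime either falls completely or barely notices.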

It’s getting late, but I think this suggests that the key for most regimes (and not only official ones) is not to maintain legitimacy (that’s hard if, say, the peasants are starving), but to control what people know and make sure they’re convinced that if they step out of line, they will get shot. So long as shooting is on the table, even people who don’t like the regime will go along with it and enforce it by shooting dissidents.

The whole point of purity spirals and outrage mobs, then, may be to signal to anyone thinking of defecting from an ideology within a culture that “if you cross this line, you will get [metaphorically] shot.” It doesn’t even matter whether the people being destroyed by the mob actually did anything wrong, so long as the mob is effective at destruction.

When GDP is a Scam

Look, just… think about this for a moment. How does a group in which 65% are unemployed and can’t speak the language add anything to the economy?

You could do some sleight of hand in which a few very productive workers–a random billionaire like Carlos Slim or Steve Jobs–get lumped in with a bunch of people they have nothing to do with on the grounds that they are all “Syrian,” and come up with numbers like this, but that is exactly what it would be: sleight of hand. It’s deceptive, and it’s obviously lumping together completely disparate groups in order to make one group look much better than it is.

How to tell if someone is lying to you with GDP:

  1. Consider splitting the population in two–say, Germans and non-Germans. Would GDP for one group go down? If yes, then you’re being lied to. Simply “growing” the GDP by drawing a bigger circle on the map and adding up the GDP inside of it doesn’t mean the people inside the circle have more money. It just means you drew a bigger circle.
  2. Is this GDP “growth” entirely due to government spending to deal with the problems brought by the newcomers, like funding schools to teach them German? If yes, then it is not real growth. Everyone who was already in Germany was just taxed to provide services to non-Germans. You could do that without anyone immigrating to Germany.
  3. Does this “growth” translate into a higher standard of living or happiness for the average or median German? If not, you are being lied to.
  4. If you could magically replace every one of those migrants with a German of the same age and sex, would GDP go up or down? If up, then these migrants are a bad investment.
  5. If these immigrants are so great for the German economy, why were they so bad for their home economies?

Let’s summarize these varieties of “GDP” abuse: 1. Not looking at Per Capita, 2. “Growth” that doesn’t help the locals, 3. Growth that goes to elites, 4. Opportunity cost, 5. Not making any sense.
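Check #1 can be illustrated with entirely made-up numbers (the population and GDP figures below are hypothetical, chosen only to show the arithmetic): lumping a poorer group inside the circle raises headline GDP while lowering GDP per capita for everyone.

```python
# Hypothetical figures, for illustration only.
natives_pop, natives_gdp = 80_000_000, 4_000_000_000_000      # $50,000 per head
newcomers_pop, newcomers_gdp = 1_000_000, 20_000_000_000      # $20,000 per head

total_gdp = natives_gdp + newcomers_gdp       # bigger circle: headline GDP "grew"
total_pop = natives_pop + newcomers_pop

per_capita_before = natives_gdp / natives_pop  # 50,000
per_capita_after = total_gdp / total_pop       # ~49,630

print(total_gdp > natives_gdp)                       # True: total GDP is up
print(per_capita_after < per_capita_before)          # True: per-capita GDP fell
```

Both statements are true at once, which is the whole trick: whether “the economy grew” depends entirely on whether you quote the total or the per-capita figure.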

GDP is one way to measure the economy, but it isn’t the only way–and it certainly isn’t the only thing that matters.

Let us consider the simplest possible economy:

Joe and Bob are sailors who have been marooned on a desert island. They have plenty of fish to eat and two coconuts, which they use as currency. When Joe wants something from Bob, he pays him one Coconut. When Bob wants something from Joe, he pays him one Coconut.

Bob and Joe are also economists, so they keep track of their local GDP, tallying up every Coconut exchange. After two months, they have a robust GDP of 4,000 Coconuts per month–and then the coconuts sprout. For the next two months, Bob and Joe abandon the Coconut and just do favors for each other. The economy crashes. The island has a GDP of 0 Coconuts–but oddly, Bob and Joe are doing just as well and eating just as much fish as before.

You might be objecting that the Coconut Economy is a bit contrived, but it is relevant to our real world.
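Written as code, the Coconut Economy makes the bookkeeping explicit: GDP here is nothing but a tally of coconut-denominated transactions, so it can crash to zero while consumption is untouched (the fish numbers are invented for the sketch).

```python
# GDP as a tally of monetary transactions vs. what the islanders actually eat.
periods = [
    {"coconut_trades": 4000, "fish_eaten": 300},  # coconut-currency months
    {"coconut_trades": 0,    "fish_eaten": 300},  # favors-only months
]

gdp = [p["coconut_trades"] for p in periods]
welfare = [p["fish_eaten"] for p in periods]

print(gdp, welfare)  # [4000, 0] [300, 300] -- a 100% GDP crash, same dinners
```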

Over in our economy, if I hire a maid to vacuum my carpet, this is “economic activity” and it counts toward the GDP–but if I vacuum it myself, it doesn’t count. If I am hired as a nanny to take care of someone else’s children, this is “economic activity,” but if I take care of my own children, it doesn’t count. Even hiring a prostitute is “economic activity” as far as GDP is concerned, but having sex for free is not.

It is easy to see how something like “women entering the workforce” might generate a substantial GDP boost–while not actually changing anything but the location where particular jobs are done. There is no more economic value to vacuuming my carpet or yours. It’s all carpet.

There are plenty of non-economic arguments relevant here. I might be happier hiring someone to vacuum my carpet so I can spend my time on other activities, or I might be happier tidying up my own home to get everything just the way I like it. I might provide more nurturing childcare to my own children, or a professional might be able to invest in expensive toys and games that can be enjoyed by the many children in their care. People may enjoy consensual sex with their spouses more than with prostitutes–or may hate people and just want to visit the occasional prostitute. Etc.

All of these nuances of life are missed in any simple argument about GDP.

Any argument that hinges solely on what immigrants add (or subtract) from GDP is misguided because ultimately, people don’t exist to make economies better–economies exist to make people better.

Would you try to “improve” the lives of the Amish by inviting a bunch of Silicon Valley executives to settle in their area and boost GDP?

Of course not. It wouldn’t even be a meaningful exercise–the Amish are perfectly happy the way they are. Different people want different things in life. Sometimes that’s more money. Sometimes it’s community, free time, health, or nature. The Silicon Valley folks wouldn’t be too happy having to deal with the Amish, either.

About 30 years ago, economists-on-TV and pundits became convinced that the key to the economy was “spending” and therefore we had to convince people to spend more. Tracking things like “how much people spend on Christmas shopping” became an annual theme.

Of course, spending comes at a cost (future saving), and future saving comes at a cost (current consumption).

Here is a good article on why GDP and CPI are Broken:

Imagine China is implementing a mercantilist policy. An American spends cash to buy goods exported from China. The Chinese recycle the money earned from exporting goods to America into buying mortgage backed securities. The American takes out a big mortgage to buy a house, mortgage bought by the Chinese. The American is effectively borrowing from the Chinese in order to fund current consumption. The net result is that America is a net seller of home equity and in return has received goods. The price of homes will be pushed up. In the GDP statistics this will actually show up as economic growth (since the cheap Chinese goods will push down the GDP deflator). But in reality, there has been no growth, the U.S. is simply selling off its own wealth and getting poorer.

Of course, this is what is actually happening.
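The deflator mechanism in the quoted passage is just division–real GDP is nominal GDP divided by the deflator–so with invented round numbers, a flat nominal GDP plus a 3% drop in the price level shows up as roughly 3% “real growth”:

```python
# Real GDP = nominal GDP / deflator. Toy numbers: nothing extra was produced,
# but cheap imports pull the price level down, so measured real GDP rises.
nominal = [100.0, 100.0]          # year 1, year 2: no actual growth
deflator = [1.00, 0.97]           # price level falls 3%

real = [n / d for n, d in zip(nominal, deflator)]
growth = real[1] / real[0] - 1

print(f"{growth:.1%}")            # 3.1% "real growth" from prices alone
```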

Meanwhile:

The GDP numbers do not include depreciation. So if existing infrastructure is crumbling faster than it gets replaced, GDP might show the country as actually growing while in reality things are falling apart.

If the entire city of Detroit gets destroyed in riots, fires, crime waves, and ethnic cleansing, and is left in ruins, this catastrophe would not show up in GDP numbers. In fact, GDP might actually increase since the destruction would spur the creation of new housing (which does show up in the numbers) and the average home price might actually fall (reducing the GDP deflator) due to violence making the neighborhoods unlivable.

GDP for GDP’s sake is burning houses down just so you can rebuild them–or importing foreigners just so you can spend money to teach them to speak your language.

A Libertarian Writes about Human Sacrifice

Meriah Sacrifice post, Khond people, India

I’ve written before about Peter Leeson’s work–he tends to write papers with exciting names like An-Aarrgh-chy: The Law and Economics of Pirate Organization.

Today I encountered Leeson’s less amusingly named but no less interesting paper on Human Sacrifice [PDF], whose abstract is one of the most libertarian things I have ever read:

This paper develops a theory of rational human sacrifice: the purchase and ritual slaughter of innocent persons to appease divinities. I argue that human sacrifice is a technology for protecting property rights. It improves property protection by destroying part of sacrificing communities’ wealth, which depresses the expected payoff of plundering them. … Human sacrifice is spectacular, publicly communicating a sacrificer’s destruction far and wide. Further, immolating a live person is nearly impossible to fake… To incentivize community members to contribute wealth for destruction, human sacrifice is presented as a religious obligation. To test my theory I investigate human sacrifice as practiced by the most significant and well-known society of ritual immolators in the modern era: the Konds of Orissa, India.

Of course, it is not exactly a rigorous test, but it is still an interesting case.

Leeson’s argument is not as crazy as it sounds at first glance. Suppose you have two very similar villages living near each other; neither has any particular advantage over the other. Each produces food each year, but food production varies due to natural vicissitudes. Some years village A produces more food; some years village B produces more food. Let’s suppose A has more food. Village B might decide to go steal some of A’s food. But war is expensive: B will only want to go to war if they can reasonably hope to steal more than the cost of war.

This sets up a situation where A has two potential war-avoiding strategies. A can pay off B, giving them enough of their surplus food to make them not want to go to war, or A can burn their surplus food, making war pointless.

The first option is a good idea if you have some hope of someday trading for surpluses with Village B in the future; the second option is a good idea if Village B is full of treacherous backstabbers and you’d rather burn your crops than let them have a crumb.
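Leeson’s deterrence logic reduces to a single inequality–B raids only if the plunderable surplus exceeds its cost of war–so A can buy peace by destroying just enough surplus. A sketch with invented unit numbers:

```python
# Village B raids iff the loot exceeds its cost of fighting;
# destroying surplus can therefore deter a raid.
def b_attacks(plunderable_surplus, war_cost):
    return plunderable_surplus > war_cost

WAR_COST = 50
surplus = 80
assert b_attacks(surplus, WAR_COST)                   # 80 > 50: raiding pays

sacrificed = 31                                       # publicly destroy 31 units
assert not b_attacks(surplus - sacrificed, WAR_COST)  # 49 < 50: raiding doesn't pay
# A keeps 49 units instead of losing the whole surplus (plus a war) to B.
```

The destruction only works as deterrence if B believes it happened, which is where the paper’s argument about spectacular, hard-to-fake signals comes in.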

Burning crops is all well and good, but what if your enemies don’t believe that you’ve really burned them? What if the sacrifice is essentially fake, like Prometheus deceiving Zeus by wrapping bull bones in glistening fat? Your enemies might attack you anyway, despite your sacrifice.

Then you need a harder-to-fake signal, like spending your wealth on expensive trade goods which are then publicly destroyed–and the purchase and killing of costly slaves, Leeson argues, is particularly difficult to fake.

There follows some math and a description of human sacrifice among the Konds (also spelled Khonds and Kondhs) of India, which I shall quote a bit:

Kond communities sacrificed humans. Their victims were called meriahs. Konds purchased these persons from meriah sellers called Doms (or Pans) … In principle meriahs could be persons of any age, sex, race, or caste. In practice they were nearly always non-Konds. …

Every community held at least one of these festivals every year. Typically a single meriah was sacrificed at each festival. But this was a lower bound. Kond country visitors occasionally reported sacrifices of upwards of 20 meriahs at a time (Selections from the Government of India, 1854, p. 22). The general impression of British officers who visited Kond country was that “the number of Meriahs annually immolated” was large — very large — indeed, “far larger than could readily be credited” (Selections from the Government of India, 1854, p. 28; see also, C.R., 1846a, p. 61). …

Immolation festivals were large, raucous, three-day parties at which attendees engaged “in the indulgence of every form of wild riot, and generally of gross excess” (Macpherson, 1865, p. 118). The villages that composed each community took turns sponsoring the festival—purchasing the meriah and hosting the party. …

These festivals’ main event was the immolation itself, which took place on the party’s third day. On this day the sponsoring Kond village’s head brought the meriah, intoxicated with alcohol or opium, to a spot previously appointed for the sacrifice. …

In some cases the victim’s arms and legs were broken to prevent his motion. After this and some final prayers, the priest gave the word, and “the crowd throws itself upon the sacrifice and strips the flesh from the bones, leaving untouched the head and intestines” (Macpherson, 1865, p. 128). While cutting the victim to pieces in this fashion was common, Konds sometimes used other modes of immolation — all of them spectacular and spectacularly brutal — ranging from drowning the victim in a pit of pig’s blood to beating him to death with brass bangles, always followed by cutting him into small pieces…

The meriah thus slaughtered, the festival reached its crescendo. The chief gave a pig or buffalo to the priest and the meriah’s seller, concluding the event. Each of the participating villages’ representatives took a strip of the corpse’s flesh and departed for their settlements where they shared it with their village members who buried the flesh in their fields.

The British, of course, put a stop to the ritual–a classic act of white people destroying POC culture.

You can read the paper yourself and decide if you think the Kond case supports Leeson’s thesis.

The Wikipedia page on the Khonds is not particularly insightful, because the Wikipedians have decided not to allow any British Raj-era sources to be used for information. This is, of course, base censorship. The page overall is not up to Wikipedia’s usual standards:

The Kondh are adept land dwellers exhibiting greater adaptability to the forest and hill environment. However, due to development interventions in education, medical facilities, irrigation, plantation and so on, they are forced into the modern way of life in many ways. Their traditional life style, customary traits of economy, political organization, norms, values and world view have been drastically changed in recent times. …

The Kondh family is often nuclear, although extended joint families are also found. Female family members are on equal social footing with the male members in Kondh society, and they can inherit, own, hold and dispose off property without reference to their parents, husband or sons. … Children are never considered illegitimate in Kondh society and inherit the clan name of their biological or adoptive fathers with all the rights accruing to natural born children. The Kondhs have a dormitory for adolescent girls and boys which forms a part of their enculturation and education process. The girls and boys sleep at night in their respective dormitory and learn social taboos, myths, legends, stories, riddles, proverbs amidst singing and dancing the whole night, thus learning the way of the tribe.

Apparently Khonds don’t need to sleep, unlike us mere mortals. It’s just the myth of the noble savage, the peaceful egalitarian who sings and dances all night in harmony with nature.

No explanation is given for the photo of the “meriah sacrifice post.”

Traditionally the Kondh religious beliefs were syncretic combining totemism, animism, Ancestor worship, shamanism and nature worship. … In the Kondh society, a breach of accepted religious conduct by any member of their society invited the wrath of spirits in the form of lack of rain fall, soaking of streams, destruction of forest produce, and other natural calamities. Hence, the customary laws, norms, taboos, and values were greatly adhered to and enforced with high to heavy punishments, depending upon the seriousness of the crimes committed.

This is what pretty much every religion believes.

The practise of traditional religion has almost become extinct today. Many Kondhs converted to Protestant Christianity in the late nineteenth and early twentieth century due to the efforts of the missionaries of the Serampore Mission. … Significantly, as with any culture, the ethical practices of the Kondh reinforce the social and economic practices that define the people. Thus, the sacredness of the earth perpetuates tribal socio-economics, wherein harmony with nature and respect for ancestors is deeply embedded whereas non tribal cultures that neglect the sacredness of the land find no problem in committing deforestation, strip-mining etc., and this has led to a situation of conflict in many instances.[5]

Yes, everyone knows that people who practice slash-and-burn agriculture just love nature.

Say what you will for Leeson’s theory, at least he doesn’t LIE to us.

Infanticide and Cannibalism in Sociobiology

This is a little quote from E. O. Wilson’s Sociobiology that I deleted from the previous post for being a little tangential, but it is still interesting:

Guppies (Lebistes reticulatus) are well known for the stabilization of their populations in aquaria by the consumption of their excess young.

So that’s what happened to my pet fish! I always wondered why they seemed to appear and disappear at random. It wasn’t a big enough bowl to logically be losing them in.

Um. Poor guppies.

“Cannibalism is commonplace in the social insects, where it serves as a means of conserving nutrients as well as a precise mechanism for regulating colony size. The colonies of all termite species so far investigated promptly eat their own dead and injured. Cannibalism is in fact so pervasive in termites that it can be said to be a way of life in these insects. …

The eating of immature stages is common in the social Hymenoptera.

Hymenoptera is an order of insects with over 150,000 species, including ants and bees. (Termites, despite also being social, are not members of hymenoptera, and are more closely related to cockroaches.)

Quoting Wikipedia:

Among most or all hymenopterans, sex is determined by the number of chromosomes an individual possesses.[17] Fertilized eggs get two sets of chromosomes (one from each parent’s respective gametes) and develop into diploid females, while unfertilized eggs only contain one set (from the mother) and develop into haploid males. The act of fertilization is under the voluntary control of the egg-laying female, giving her control of the sex of her offspring.[15] This phenomenon is called haplodiploidy.

However, the actual genetic mechanisms of haplodiploid sex determination may be more complex than simple chromosome number. In many Hymenoptera, sex is actually determined by a single gene locus with many alleles.[17] In these species, haploids are male and diploids heterozygous at the sex locus are female, but occasionally a diploid will be homozygous at the sex locus and develop as a male, instead. This is especially likely to occur in an individual whose parents were siblings or other close relatives. Diploid males are known to be produced by inbreeding in many ant, bee, and wasp species. Diploid biparental males are usually sterile but a few species that have fertile diploid males are known.[18]

One consequence of haplodiploidy is that females on average actually have more genes in common with their sisters than they do with their own daughters. Because of this, cooperation among kindred females may be unusually advantageous, and has been hypothesized to contribute to the multiple origins of eusociality within this order.[15][19] In many colonies of bees, ants, and wasps, worker females will remove eggs laid by other workers due to increased relatedness to direct siblings, a phenomenon known as worker policing.[20]

Another consequence is that hymenopterans may be more resistant to the deleterious effects of inbreeding. As males are haploid, any recessive genes will automatically be expressed, exposing them to natural selection. Thus, the genetic load of deleterious genes is purged relatively quickly.[21]
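The quoted claim that females share more genes with their sisters than with their own daughters follows from one line of arithmetic (this is the standard relatedness calculation for haplodiploids, not anything I am adding to the quote):

```python
# Expected fraction of genes shared identical-by-descent under haplodiploidy.
# A female gets half her genome from her haploid father, half from her mother.
from_father = 0.5 * 1.0   # full sisters share ALL of dad's single genome copy
from_mother = 0.5 * 0.5   # and, on average, half of mom's contribution

r_full_sisters = from_father + from_mother   # 0.75
r_mother_daughter = 0.5                      # a mother passes on half her genes

print(r_full_sisters, r_mother_daughter)     # 0.75 0.5
```

That 0.75 vs. 0.5 gap is the quantitative heart of the kin-selection argument for eusociality in this order.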

Back to Wilson:

In ant colonies, all injured eggs, larvae, and pupae are quickly consumed. When colonies are starved, workers begin attacking healthy brood as well. In fact, there exists a direct relation between colony hunger and the amount of brood cannibalism that is precise enough to warrant the suggestion that the brood functions normally as a last-ditch food supply to keep the queen and workers alive. In the army ants of the genus Eciton, cannibalism has apparently been further adapted to the purposes of caste determination. According to Schneirla (1971), most of the female larvae in the sexual generation (the generation destined to transform into males and queens) are consumed by workers. The protein is converted into hundreds or thousands of males and several of the very large virgin queens. It seems to follow, but is far from proved, that female larvae are determined as queens by this special protein-rich diet. Other groups of ants, bees, and wasps show equally intricate patterns of specialized cannibalism…

E. O. Wilson once said of Marxism, “Wonderful theory, wrong species.”

Nomadic male lions of the Serengeti plains frequently invade the territories of prides and drive away or kill the resident males. The cubs are also sometimes killed and eaten during territorial disputes. … Infant mortality is much higher as a result of the disturbances [in the social order of langurs.] In the case of P. entellus, [a langur species,] the young are actually murdered by the usurper…

Harry Potter and the Coefficient of Kinship

Coefficient of kinship

The main character of the first 4 chapters of Harry Potter isn’t Harry: it’s the Dursleys:

Mr and Mrs Dursley, of number four, Privet Drive, were proud to say that they were perfectly normal, thank you very much. They were the last people you’d expect to be involved in anything strange or mysterious, because they just didn’t hold with such nonsense.

The Dursleys are awful and abusive in an over-the-top, Roald Dahl way that somehow manages not to cause Harry any serious emotional problems, which even I, a hard-core hereditarian, would find improbable if Harry were a real boy. But Harry isn’t the point: watching the Dursleys get their comeuppance is the point.

JRR Tolkien and JK Rowling both focused on the same group of people–common English peasants–but Tolkien’s depiction of the Hobbits is much more sympathetic than Rowling’s depiction of the Muggles, even though the Hobbits don’t like adventures either:

This hobbit was a very well-to-do hobbit, and his name was Baggins. The Bagginses had lived in the neighborhood of The Hill for time out of mind and people considered them very respectable, not only because most of them were rich, but also because they never had any adventures or did anything unexpected: you could tell what a Baggins would say on any question without the bother of asking him.

We could wax philosophical (or political) about why Tolkien sees common folk as essentially good, despite their provinciality, and why Rowling sees them as essentially bad, for precisely the same reasons, but in the end both writers are correct, for there is good and bad in all groups.

Why are the Dursleys effective villains? Why is their buffoonish abuse believable, and why do so many people identify with young Harry? Is he not the Dursleys’ kin, if not their son, then their nephew? Shouldn’t they look out for him?

One of the great ironies of life is that the people who are closest to us are also the most likely to abuse us. Despite fears of “stranger danger” (or perhaps because of it) children are most likely to be harmed by parents, step-parents, guardians, or other close relatives/friends of the family, not strangers lurking in alleys or internet chatrooms.

The WHO reports: 

…there were an estimated 57 000 deaths attributed to homicide among children under 15 years of age in 2000. Global estimates of child homicide suggest that infants and very young children are at greatest risk, with rates for the 0–4-year-old age group more than double those of 5–14-year-olds…

The risk of fatal abuse for children varies according to the income level of a country and region of the world. For children under 5 years of age living in high-income countries, the rate of homicide is 2.2 per 100 000 for boys and 1.8 per 100 000 for girls. In low- to middle-income countries the rates are 2–3 times higher – 6.1 per 100 000 for boys and 5.1 per 100 000 for girls. The highest homicide rates for children under 5 years of age are found in the WHO African Region – 17.9 per 100 000 for boys and 12.7 per 100 000 for girls.

(Aside: in every single region, baby boys were more likely to be murdered than baby girls–how’s that “male privilege” for you?)

Estimates of physical abuse of children derived from population-based surveys vary considerably. A 1995 survey in the United States asked parents how they disciplined their children (12). An estimated rate of physical abuse of 49 per 1000 children was obtained from this survey when the following behaviours were included: hitting the child with an object, other than on the buttocks; kicking the child; beating the child; and threatening the child with a knife or gun. …

- In a cross-sectional survey of children in Egypt, 37% reported being beaten or tied up by their parents and 26% reported physical injuries such as fractures, loss of consciousness or permanent disability as a result of being beaten or tied up (17).
- In a recent study in the Republic of Korea, parents were questioned about their behaviour towards their children. Two-thirds of the parents reported whipping their children and 45% confirmed that they had hit, kicked or beaten them (26).
- A survey of households in Romania found that 4.6% of children reported suffering severe and frequent physical abuse, including being hit with an object, being burned or being deprived of food. Nearly half of Romanian parents admitted to beating their children ‘‘regularly’’ and 16% to beating their children with objects (34).
- In Ethiopia, 21% of urban schoolchildren and 64% of rural schoolchildren reported bruises or swellings on their bodies resulting from parental punishment (14).

Ugh. The Dursleys are looking almost decent right now.

In most ways, the Dursleys do not fit the pattern characteristic of most abuse cases–severe abuse and neglect are concentrated among drug-addicted single mothers with more children than they can feed and an unstable rotation of unrelated men in and out of the household. The Dursleys’ case is far milder, but we may still ask: why would anyone mistreat their kin? Wouldn’t natural selection–selfish genes and all that–select against such behavior?

There are a number of facile explanations for the Dursleys’ behavior. The first, suggested obliquely by Rowling, is that Mrs. Dursley was jealous of her sister, Lily, Harry’s mother, for being more talented (and prettier) than she was. This is the old “they’re only bullying you because they’re jealous” canard, and it’s usually wrong. We may discard this explanation immediately, as it is simply too big a leap from “I was jealous of my sister” to “therefore I abused her orphaned child for 11 years.” Most of us endured some form of childhood hardship–including sibling rivalry–without turning into abusive assholes who lock little kids in cupboards.

The superior explanation is that there is something about Harry that they just can’t stand. He’s not like them. This is expressed in Harry’s appearance–the Dursleys are described as tall, fat, pink skinned, and blue eyed with straight, blond hair, while Harry is described as short, skinny, pale skinned, and green-eyed with wavy, dark hair.

More importantly, Harry can do magic. The Dursleys can’t.

It’s never explained in the books why some people can do magic and not others, but the trait looks strongly like a genetic one–not much more complicated than blue eyes. Magic users normally give birth to magical children, and non-magic users (the term “muggle” is an ethnic slur and should be treated as such) normally have non-magical children. Occasionally magical children are born to regular families, just as two brown-eyed parents occasionally have a blue-eyed child because both carried a recessive blue-eye gene that they happened to pass on to their offspring; and occasionally magical parents have regular children, just as smart people sometimes have dumb offspring. On the whole, however, magical ability is stable enough across generations that there are whole magical families that have been around for hundreds of years, and non-magical families that have done the same.

Any other factor–environmental, magical–could have been figured out by now and used to turn kids like Neville into competent wizards, so we conclude that such a factor does not exist.

Magic is a tricky thing to map, metaphorically, onto everyday existence, because nothing like it really exists in our world. We can vaguely imagine that Elsa hiding her ice powers is kind of like a gay person hiding the fact that they are gay, but being gay doesn’t let you build palaces or create sentient snowmen. Likewise, the Dursleys’ anger at Harry being “one of them,” and their adamant insistence that magic and wizardry don’t exist–despite knowing very well that Mrs. Dursley’s sister could turn teacups into frogs–does resemble the habit of certain very conservative people of pretending that homosexuality doesn’t exist, or that if their children never hear that homosexuality exists, they’ll never become gay.

The other difficulty with this metaphor is that gay people, left to their own devices, don’t produce children.

But putting together these two factors, we arrive at the conclusion that wizards are a distinct, mostly endogamous ethnic group that the Dursleys react to as though they were flaming homosexuals.

How many generations of endogamy would it take to produce two genetically distinct populations from one? Not many–take, for example, the Irish Travellers:

Researchers led by the Royal College of Surgeons in Ireland (RCSI) and the University of Edinburgh analysed genetic information from 42 people who identified as Irish Travellers.

The team compared variations in their DNA code with that of 143 European Roma, 2,232 settled Irish, 2,039 British and 6,255 European or worldwide individuals. …

They found that Travellers are of Irish ancestral origin but have significant differences in their genetic make-up compared with the settled community.

These differences have arisen because of hundreds of years of isolation combined with a decreasing Traveller population, the researchers say. …

The team estimates the group began to separate from the settled population at least 360 years ago.

That’s a fair bit of separation for a mere 360 years or so–and certainly enough for your relatives to act rather funny about it if you decided to run off with Travellers and then your orphaned child turned up on their doorstep.

How old are the wizarding families? Ollivander’s Fine Wands has been in business since 382 BC, and Merlin, Agrippa, and Ptolemy are mentioned as ancient Wizards, so we can probably assume a good 2,000 years of split between the two groups, with perhaps a 10% in-migration of non-magical spouses.
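Whether that 10% is per generation or spread over the whole 2,000 years makes an enormous difference to how “pure” the old wizarding lines could be. A back-of-envelope sketch (the 25-year generation length and both readings of the 10% figure are my assumptions, not canon):

```python
# Sketch: how much "original wizard" ancestry survives 2,000 years of
# muggle in-migration, under two readings of "10% in-migration."
# Assumptions (not from canon): 25-year generations, random mating
# within the wizarding population.

generations = 2000 // 25  # 80 generations

# Reading 1: 10% of each generation's spouses are non-magical.
# Each muggle spouse contributes half of a couple's offspring genome,
# so the pool is diluted by 0.10 / 2 = 5% per generation.
original = (1 - 0.10 / 2) ** generations
print(f"Per-generation reading: {original:.1%} original ancestry remains")

# Reading 2: 10% muggle admixture in total over the whole period,
# which simply leaves 90% original ancestry.
```

Under the per-generation reading, almost nothing of the founding stock survives; only if the 10% is a cumulative total do the old families stay genetically distinct, so endogamy really has to be strict for the “ancient wizarding family” picture to work.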

Harry is, based on his parents, 50% magical and 50% non-magical, though of course both Lily and Petunia Dursley probably carry some Wizard DNA.

In The Blank Slate, Pinker has some interesting observations on the subject of sociobiology:

As the notoriety of Sociobiology grew in the ensuing years, Hamilton and Trivers, who had thought up many of the ideas, also became targets of picketers… Trivers had argued that sociobiology is, if anything, a force for political progress. It is rooted in the insight that organisms did not evolve to benefit their family, group, or species, because the individuals making up those groups have genetic conflicts of interest with one another and would be selected to defend those interests. This immediately subverts the comfortable belief that those in power rule for the good of all, and it throws a spotlight on hidden actors in the social world, such as females and the younger generation.

Further in the book, Pinker continues:

Tolstoy’s famous remark that happy families are all alike but every unhappy family is unhappy in its own way is not true at the level of ultimate (evolutionary) causation. Trivers showed how the seeds of unhappiness in every family have the same underlying source. Though relatives have common interests because of their common genes, the degree of overlap is not identical within all their permutations and combinations of family members. Parents are related to all of their offspring by an equal factor, 50 percent, but each child is related to himself or herself by a factor of 100 percent. …

Parental investment is a limited resource. A day has only twenty-four hours … At one end of the lifespan, children learn that a mother cannot pump out an unlimited stream of milk; at the other, they learn that parents do not leave behind infinite inheritances.

To the extent that emotions among people reflect their typical genetic relatedness, Trivers argued, the members of a family should disagree on how parental investment should be divvied up.

And to the extent that one of the children in a household is actually a mixed-ethnicity nephew and no close kin at all to the father, the genetic relationship is even more distant between Harry and the Dursleys than between most children and the people raising them.

Parents should want to split their investment equitably among the children… But each child should want the parent to dole out twice as much of the investment to himself or herself as to a sibling, because children share half their genes with each full sibling but share all their genes with themselves. Given a family with two children and one pie, each child should want to split it in a ratio of two thirds to one third, while parents should want it to be split fifty fifty.

A person normally shares about 50% of their genes with their child and 25% of their genes with a niece or nephew, but we also share a certain amount of genes just by being distantly related to each other in the same species, race, or ethnic group.

Harry is, then, somewhat less genetically similar than the average nephew, so we can expect Mrs. Dursley to split any pies a bit more than 2/3 for Dudley and a bit less than 1/3 for Harry, with Mr. Dursley grumbling that Harry doesn’t deserve any pie at all because he’s not their kid. (In a more extreme environment, if the Dursleys didn’t have enough pie to go around, it would be in their interest to give all of the pie to Dudley, but the Dursleys have plenty of food and can afford to grudgingly keep Harry alive.)
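The relatedness-weighted logic behind these pie splits is easy to sketch. A minimal toy model, using the standard textbook coefficients (Harry is treated as an ordinary nephew/cousin here; as noted above, his true coefficients would be slightly lower):

```python
# Toy model of Trivers-style pie division: each party prefers the pie
# split in proportion to its own genetic relatedness to each recipient.

def preferred_split(relatedness: dict[str, float]) -> dict[str, float]:
    """Return each recipient's preferred share from the divider's
    point of view, weighting shares by relatedness coefficients."""
    total = sum(relatedness.values())
    return {who: r / total for who, r in relatedness.items()}

# Mrs. Dursley's view: son r = 0.5, nephew r = 0.25 -> 2/3 vs 1/3.
print(preferred_split({"Dudley": 0.5, "Harry": 0.25}))

# Dudley's own view: himself r = 1.0, first cousin r = 0.125
# -> 8/9 for himself, 1/9 for Harry.
print(preferred_split({"Dudley": 1.0, "Harry": 0.125}))
```

The same function reproduces Pinker’s two-sibling case: a parent weighting two children at 0.5 each wants fifty-fifty, while a child weighting itself at 1.0 and its sibling at 0.5 wants two thirds.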

Let’s check in with E. O. Wilson’s Sociobiology:

Most kinds of social behavior, including perhaps all of the most complex forms, are based in one way or another on kinship. As a rule, the closer the genetic relationship of the members of a group, the more stable and intricate the social bonds of its members. …

Parent-offspring conflict and its obverse, sibling-sibling conflict, can be seen throughout the animal kingdom. Littermates or nestmates fight among themselves sometimes lethally, and fight with their mothers over access to milk, food, and care…. The conflict also plays out in the physiology of prenatal human development. Fetuses tap their mothers’ bloodstreams to mine the most nutrients possible from their body, while the mother’s body resists to keep it in good shape for future children. …

Trivers touted the liberatory nature of sociobiology by invoking an “underlying symmetry in our social relationships” and “submerged actors in the social world.” He was referring to women, as we will see in the chapter on gender, and to children. The theory of parent-offspring conflict says that families do not contain all-powerful, all-knowing parents and their passive, grateful children. …

Sometimes families contain Dursleys and Potters.

Most profoundly, children do not allow their personalities to be shaped by their parents’ nagging, blandishments, or attempts to serve as role models.

Quite lucky for Harry!

Quoting Trivers:

The offspring cannot rely on its parents for disinterested guidance. One expects the offspring to be preprogrammed to resist some parental manipulation while being open to other forms. When the parent imposes an arbitrary system of reinforcement (punishment and reward) in order to manipulate the offspring to act against its own best interests, selection will favor offspring that resist such schedules of reinforcement.

(Are mixed-race kids more likely to be abused than single-race kids? Well, they’re more likely to be abused than White, Asian, or Hispanic kids, but less likely to be abused than Black or Native American children [Native American children have the highest rates of abuse]. It seems likely that the important factor here isn’t degree of relatedness, but how many of your parents hail from a group with high rates of child abuse. The Dursleys are not from a group with high child abuse rates.)

Let us return to E. O. Wilson’s Sociobiology:

Mammalogists have commonly dealt with conflict as if it were a nonadaptive consequence of the rupture of the parent-offspring bond. Or, in the case of macaques, it has been interpreted as a mechanism by which the female forces the offspring into independence, a step designed ultimately to benefit both generations. …

A wholly different approach to the subject has been taken by Trivers (1974). … Trivers interprets it as the outcome of natural selection operating in opposite directions on the two generations. How is it possible for a mother and her child to be in conflict and both remain adaptive? We must remember that the two share only one half their genes by common descent. There comes a time when it is more profitable for the mother to send the older juvenile on its way and to devote her efforts exclusively to the production of a new one. To the extent that the first offspring stands a chance to achieve an independent life, the mother is likely to increase (and at most, double,) her genetic representation in the next breeding generation by such an act. But the youngster cannot be expected to view the matter in this way at all. …

If the mother’s inclusive fitness suffers first from the relationship, conflict will ensue.

At some point, of course, the child is grown and therefore no longer benefits from the mother’s care; at this point the child and mother are no longer in conflict, but the roles may reverse as the parents become the ones in need of care.

As for humans:

Consider the offspring that behaves altruistically toward a full sibling. If it were the only active agent, its behavior would be selected when the benefit to the sibling exceeds two times the cost to itself. From the mother’s point of view, however, inclusive fitness is gained whenever the benefit to the sibling simply exceeds the cost to the altruist. Consequently, there is likely to evolve a conflict between parents and offspring in the attitudes toward siblings: the parent will encourage more altruism than the youngster is prepared to give. The converse argument also holds: the parent will tolerate less selfishness and spite among siblings than they have a tendency to display…

Indeed, Dudley is, in his way, crueler (more likely to punch Harry) and more greedy than even his parents.

Altruistic acts toward a first cousin are ordinarily selected if the benefit to the cousin exceeds 8 times the cost to the altruist, since the coefficient of relationship of first cousins is 1/8. However, the parent is related to its nieces and nephews by r=1/4, and it should prefer to see altruistic acts by its children toward their cousins whenever the benefit-to-cost ratio exceeds 2. Parental conscientiousness will also extend to interactions with unrelated individuals. From a child’s point of view, an act of selfishness or spite can provide a gain so long as its own inclusive fitness is enhanced… In human terms, the asymmetries in relationship and the differences in responses they imply will lead in evolution to an array of conflicts between parents and their children. In general, offspring will try to push their own socialization in a more egoistic fashion, while the parents will repeatedly attempt to discipline the children back to a higher level of altruism. There is a limit to the amount of altruism [healthy, normal] parents want to see; the difference is in the levels that selection causes the two generations to view as optimum.
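Wilson’s thresholds here are all instances of Hamilton’s rule, benefit × r > cost, evaluated from two different points of view. A minimal sketch of the arithmetic (the function names are mine, not Wilson’s):

```python
# Hamilton's rule: an altruistic act is favored when benefit * r > cost.
# The same act is judged differently by the actor and by the parent,
# because they weigh actor and recipient by different relatedness values.

def actor_threshold(r_recipient: float) -> float:
    """Minimum benefit-to-cost ratio at which the actor itself gains
    inclusive fitness: benefit * r_recipient > cost * 1."""
    return 1 / r_recipient

def parent_threshold(r_actor: float, r_recipient: float) -> float:
    """Minimum benefit-to-cost ratio at which the parent gains:
    benefit * r_recipient > cost * r_actor."""
    return r_actor / r_recipient

# Wilson's first-cousin example:
print(actor_threshold(1 / 8))        # child's own threshold: 8x the cost
print(parent_threshold(1 / 2, 1 / 4))  # parent's threshold: only 2x
```

The gap between 8 and 2 is exactly the zone of parent-offspring conflict Wilson describes: acts the parent wants to see, and the child does not want to perform.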

To return to Pinker:

As if the bed weren’t crowded enough, every child of a man and a woman is also the grandchild of two other men and two other women. Parents take an interest in their children’s reproduction because in the long run it is their reproduction, too. Worse, the preciousness of female reproductive capacity makes it a valuable resource for the men who control her in traditional patriarchal societies, namely her father and brothers. They can trade a daughter or sister for additional wives or resources for themselves and thus they have an interest in protecting their investment by keeping her from becoming pregnant by men other than the ones they want to sell her to. It is not just the husband or boyfriend who takes a proprietary interest in a woman’s sexual activity, then, but also her father and brothers. Westerners were horrified by the treatment of women under the regime of the Taliban in Afghanistan from 1995 to 2001…

[ah, such an optimistic time Pinker wrote in]

Like many children, Harry is rescued from a bad family situation by that most modern institution, the boarding school.

The weakening of parents’ hold over their older children is also not just a recent casualty of destructive forces. It is part of a long-running expansion of freedom in the West that has granted children their always-present desire for more autonomy than parents are willing to cede. In traditional societies, children were shackled to the family’s land, betrothed in arranged marriages, and under the thumb of the family patriarch. That began to change in Medieval Europe, and some historians argue it was the first steppingstone in the expansion of rights that we associate with the Enlightenment and that culminated in the abolition of feudalism and slavery. Today it is no doubt true that some children are led astray by a bad crowd or popular culture. But some children are rescued from abusive or manipulative families by peers, neighbors, and teachers. Many children have profited from laws, such as compulsory schooling and the ban on forced marriages, that may override the preferences of their parents.

The sad truth, for Harry–and many others–is that their interests and their relatives’ interests are not always the same. Sometimes humans are greedy, self-centered, or just plain evil. Small children are completely dependent on their parents and other adults, unable to fend for themselves–so the death of his parents followed by abuse and neglect by his aunt and uncle constitute true betrayal.

But there is hope, even for an abused kid like Harry, because we live in a society that is much larger than families or tribal groups. We live in a place where honor killings aren’t common and even kids who aren’t useful to their families can find a way to be useful in the greater society. We live in a civilization.

Links Post: Evolution and More

From State of the Science: Finding Human Ancestors in New Places

The Puerto Rican rainforest is beautiful and temporarily low on bugs. (Bugs, I suspect, evolve quickly and so can bounce back from these sorts of collapses–but they are collapses.)

More evidence for an extra Neanderthal or Denisovan interbreeding event in East Asians and Melanesian genomes:

 In addition to the reported Neanderthal and Denisovan introgressions, our results support a third introgression in all Asian and Oceanian populations from an archaic population. This population is either related to the Neanderthal-Denisova clade or diverged early from the Denisova lineage.

(Congratulations to the authors, Mondal, Bertranpetit, and Lao.)

Really interesting study on gene-culture co-evolution in Northeast Asia:

Here we report an analysis comparing cultural and genetic data from 13 populations from in and around Northeast Asia spanning 10 different language families/isolates. We construct distance matrices for language (grammar, phonology, lexicon), music (song structure, performance style), and genomes (genome-wide SNPs) and test for correlations among them. … robust correlations emerge between genetic and grammatical distances. Our results suggest that grammatical structure might be one of the strongest cultural indicators of human population history, while also demonstrating differences among cultural and genetic relationships that highlight the complex nature of human cultural and genetic evolution.

I feel like there’s a joke about grammar Nazis in here.

Why do we sleep? No one knows.

While humans average seven hours, other primates range from just under nine hours (blue-eyed black lemurs) to 17 (owl monkeys). Chimps, our closest living evolutionary relatives, average about nine and a half hours. And although humans doze for less time, a greater proportion is rapid eye movement sleep (REM), the deepest phase, when vivid dreams unfold.

Sleep is pretty much universal in the animal kingdom, but different species vary greatly in their habits. Elephants sleep about two hours out of 24; sloths more than 15. Individual humans vary in their sleep needs, but interestingly, different cultures vary greatly in the timing of their sleep, e.g., the Spanish siesta. Our modern notion that people “should” sleep in a solid, 7-9 hour chunk (going so far as to “train” children to do it,) is more a result of electricity and industrial work schedules than anything inherent or healthy about human sleep. So if you find yourself stressed out because you keep taking a nap in the afternoon instead of sleeping through the night, take heart: you may be completely normal. (Unless you’re tired because of some illness, of course.)

Interestingly:

Within any culture, people also prefer to rest and rise at different times: In most populations, individuals range from night owls to morning larks in a near bell curve distribution. Where someone falls along this continuum often depends on sex (women tend to rise earlier) and age (young adults tend to be night owls, while children and older adults typically go to bed before the wee hours).

Genes matter, too. Recent studies have identified about a dozen genetic variations that predict sleep habits, some of which are located in genes known to influence circadian rhythms.

While this variation can cause conflict today … it may be the vestige of a crucial adaptation. According to the sentinel hypothesis, staggered sleep evolved to ensure that there was always some portion of a group awake and able to detect threats.

So they gave sleep trackers to some Hadza, who must by now think Westerners are very strange, and found that at any particular period of the night, about 40% of people were awake; over 20 nights, there were “only 18 one-minute periods” when everyone was asleep. That doesn’t prove anything, but it does suggest that it’s perfectly normal for some people to be up in the middle of the night–and maybe even useful.
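A quick back-of-envelope calculation shows why staggered sleep makes “everyone asleep” so rare. Assuming (unrealistically) that each adult is awake independently with the article’s ~40% probability, and trying a few group sizes (the group sizes are my assumptions, not the study’s exact sample):

```python
# Sentinel hypothesis, back of the envelope: if each adult is awake
# independently with probability p at any given minute, the chance
# that *everyone* is asleep shrinks geometrically with group size.

p_awake = 0.4  # roughly the fraction awake at any time, per the article

for n in (5, 10, 20, 33):
    p_all_asleep = (1 - p_awake) ** n
    print(f"n={n:2d}: P(all asleep at a given minute) = {p_all_asleep:.2e}")
```

Even with only 20 adults, the chance of a fully unguarded minute is a few in a hundred thousand, which fits nicely with the study finding only 18 such minutes across 20 nights.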

Important dates in the evolution of human brain genes found:

In May, a pair of papers published by separate teams in the journal Cell focused on the NOTCH family of genes, found in all animals and critical to an embryo’s development: They produce the proteins that tell stem cells what to turn into, such as neurons in the brain. The researchers looked at relatives of the NOTCH2 gene that are present today only in humans.

In a distant ancestor 8 million to 14 million years ago, they found, a copying error resulted in an “extra hunk of DNA,” says David Haussler of the University of California, Santa Cruz, a senior author of one of the new studies.

This non-functioning extra piece of NOTCH2 code is still present in chimps and gorillas, but not in orangutans, which went off on their own evolutionary path 14 million years ago.

About 3 million to 4 million years ago, a few million years after our own lineage split from other apes, a second mutation activated the once non-functional code. This human-specific gene, called NOTCH2NL, began producing proteins involved in turning neural stem cells into cortical neurons. NOTCH2NL pumped up the number of neurons in the neocortex, the seat of advanced cognitive function. Over time, this led to bigger, more powerful brains. …

The researchers also found NOTCH2NL in the ancient genomes of our closest evolutionary kin: the Denisovans and the Neanderthals, who had brain volumes similar to our own.

And finally, Differences in Genes’ Geographic Origins Influence Mitochondrial Function:

“Genomes that evolve in different geographic locations without intermixing can end up being different from each other,” said Kateryna Makova, Pentz Professor of Biology at Penn State and an author of the paper. “… This variation has a lot of advantages; for example, increased variation in immune genes can provide enhanced protection from diseases. However, variation in geographic origin within the genome could also potentially lead to communication issues between genes, for example between mitochondrial and nuclear genes that work together to regulate mitochondrial function.”

Researchers looked at recently (by evolutionary standards) mixed populations like Puerto Ricans and African Americans, comparing the parts of their DNA that interact with mitochondria to the parts that don’t. Mitochondria are inherited from the mother, and these populations have different ethnic DNA contributions along their maternal and paternal lines. If all of the DNA were equally compatible with their mitochondria, then we’d expect to see equal contributions to the specifically mitochondria-interacting genes. If some ethnic origins interact better with the mitochondria, then we expect to see more of this DNA in these specific places.

The latter is, in fact, what we find. Puerto Ricans hail more from the Taino Indians along their mtDNA, and have relatively more Taino DNA in the genes that affect their mitochondria–indicating that over the years, individuals with more balanced contributions were selected against in Puerto Rico. (“Selection” is such a sanitized way of saying they died/had fewer children.)
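The test logic amounts to a simple enrichment comparison: is a given ancestry component over-represented in the mitochondria-interacting genes relative to the genome-wide average? A sketch (the numbers below are invented for illustration, not taken from the study):

```python
# Toy version of the mito-nuclear test: compare an ancestry component's
# share in mitochondria-interacting genes to its genome-wide share.
# A ratio above 1 means that ancestry is over-represented where the
# nuclear genome has to cooperate with the mitochondria.

def enrichment(share_in_mito_genes: float, share_genomewide: float) -> float:
    """Fold-enrichment of an ancestry in mitochondria-interacting genes."""
    return share_in_mito_genes / share_genomewide

# Hypothetical Puerto Rican-style example: Taino ancestry higher in the
# mitochondria-interacting genes than genome-wide, as expected if
# selection favored compatibility with the (Taino-dominated) mtDNA.
print(enrichment(0.18, 0.14))
```

A ratio reliably above 1 across many individuals is the signature the paper reports: mismatched combinations were, in the sanitized phrasing, “selected against.”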

This indicates that a recently admixed population may have more health issues than its parents, but the issues will work themselves out over time.