Cyborg Dreams: Alita Review with Spoilers

This is a review for Alita: Battle Angel, now out in theaters. If you want the review without spoilers, scroll down quickly to the previous post.

It is difficult for any movie to be truly deep. Is Memento deep, or does it just use a backwards-narrative gimmick? Often meaning is something we bring to movies–we interpret them based on our own experiences.

What is the point of cyborgs? They are the ultimate fusion of man and machine. Our technology doesn’t just serve us; it has become us.  What are we, then? Are cyborgs human, or more than human? And what of the un-enhanced meatsacks left behind?

Throughout the movie, we see humans with various levels of robotic enhancement, from otherwise normal people with an artificial limb to monstrous brawlers that are almost unrecognizable as human. Alita is a complete cyborg whose only remaining “human” part is her biological brain (perhaps she has a skull, too). The rest of her, from heart to toes, is machine, and can be disassembled and replaced as necessary.

The graphic novels go further than Alita’s case–in one instance, a whole community breaks down after it discovers that the adults have had their brains replaced with computer chips. Can a “human” have a metal body but a meat brain? Can a “human” have a meat body but a computer brain? Alita says yes, that humanity is more than just the raw material we are built of.

(The question also runs the other way–is the jet-powered hammer Ido carries into battle any different from a jet-powered hammer built into your arm? Does it matter whether you can put the technology down and pack it into a suitcase at the end of the day, or whether it is built into your core?)

Yet cyborgs in Alita’s world, despite their obvious advantages over mere humans in terms of speed, reflexes, strength, and the ability to swap your arms out for power saws, are mostly true to their origin as disabled people whose missing body parts were replaced with artificial limbs. Alita’s first body, given to her at the beginning of the movie after she is found without one, was originally built for a little girl in a wheelchair. She reflects to a friend that she is now fast because the little girl’s father built his daughter a fast pair of legs so she could finally run.

The upper class–to the extent that we see them–has no obvious enhancements. Indeed, the most upper-class family we meet in the movie, which originally lived in the floating city of Tiphares (Zalem in the movie), was expelled from the city and sent down to the scrap yard with the rest of the trash because of their disabled daughter–the one whose robotic body Alita inherits.

Hugo is an ordinary meat boy with what we may interpret as a serious prejudice against cyborgs–though he comes across as a nice lad, he moonlights as a thief who kidnaps cyborgs and chops off their body parts for sale on the black market. Hugo justifies himself by claiming he “never killed anyone,” which is probably true, but the process certainly hurts the cyborgs (who cry out in pain as their limbs are sawed off) and leaves them lying disabled in the street.

Hugo isn’t doing it because he hates cyborgs, though. They’re just his ticket to money–the money he needs to get to Tiphares/Zalem. For even though it is said that no one in the Scrap Yard (Iron City in the movie) is ever allowed into Tiphares, people still dream of Heaven. Hugo believes a notorious fixer named Vector can get him into Tiphares if he just pays him enough money.

Some reviewers have identified Vector as the Devil himself, based on his line, “Better to reign in Hell than serve in Heaven,” which the Devil speaks in Milton’s Paradise Lost–though Milton is himself inverting Achilles in the Odyssey, who claims, “By god, I’d rather slave on earth for another man / some dirt-poor tenant farmer who scrapes to keep alive / than rule down here over all the breathless dead.”

Yet the Scrap Yard is not Hell. Hell is another layer down; it is the sewers below the Scrap Yard, where Alita’s first real battle occurs. The Scrap Yard is Purgatory; the Scrap Yard is Earth, suspended between Heaven and Hell, from which people can choose to ascend (to Tiphares) or descend (to the sewers). But whether Tiphares is really Heaven or just a dream they’ve been sold remains to be seen–for everyone in the Scrap Yard is fallen and none may enter Heaven.

Alita, you probably noticed, descended into Hell to fight an evil monster–in the manga, because he kidnapped a baby; in the movie, because he was trying to kill her. In the ensuing battle, she is crushed and torn to pieces, sacrificing her final limb to drill out the monster’s eye. What is left of her is rescued by her friends, dragged back to the surface, and rebuilt with a new body.

“I do not stand by in the presence of evil”–Alita

“But let me reveal to you a wonderful secret. We will not all die, but we will all be transformed! It will happen in a moment, in the blink of an eye, when the last trumpet is blown. For when the trumpet sounds, those who have died will be raised to live forever. And we who are living will also be transformed. For our dying bodies must be transformed into bodies that will never die; our mortal bodies must be transformed into immortal bodies.” –1 Corinthians 15:51–53

Alita has died and been resurrected. Whether she will ascend into Heaven remains a matter for the sequel. (She does. Obviously.)

Through his relationship with Alita (they smooch), Hugo realizes that cyborgs are people, too, and maybe he shouldn’t chop them up for money.  “You are more human than anyone I know,” he tells her.

Alita, in a scene straight from The Last Temptation of Christ, offers Hugo her heart–literally–to sell to raise the remaining money he needs to make it to Tiphares.

Hugo, thankfully, declines the offer, attempting to make it to Tiphares on his own two feet (newly resurrected after Alita saves his life by literally hooking him up to her own life support system)–but no mere mortal can ascend to Tiphares; even giants may not assault the gates of Heaven.

The people of the Scrap Yard are fallen–literally–from Tiphares, their belongings and buildings either relics from the time before the fall or trash dumped from above. There is hope in the Scrap Yard, yet the place generates very little of its own, which explains its thoroughly degraded state.

This is a point where the movie fails–the set builders made the set too nice. The Scrap Yard is a decaying, post-apocalyptic waste filled with strung-out junkies and hyper-violent-TV addicts. In one scene in the manga, Doc Ido, injured, collapses in the middle of a crowd while trying to drag the remains of Alita’s crushed body back home so he can fix her. Bleeding, he cries out for help–but the crowd, entranced by the story playing out on the screens around them, ignores them both.

In the movie, the Scrap Yard has things like oranges and chocolate–suggesting long-distance trade and large-scale production–things it really shouldn’t be capable of. In the manga, the lack of police makes sense, as this is a society with no ability to cooperate for the common good. Since the powers that be would like to at least prevent their own deaths at the hands of murderers, the Scrap Yard instead puts bounties on the heads of criminals, and licensed “Hunter Warriors” decapitate them for money.

(A hunter license is not difficult to obtain. They hand them out to teenage girls.)

Here the movie enters its discussion of Free Will.

Alita awakes with no memory of her life before she became a decapitated head sitting in a landfill. She has the body of a young teen and, thankfully, adults willing to look out for her as she learns about life in Iron City from the ground up–first, that oranges have to be peeled; second, that cars can run you over.

The movie adds the backstory about Doc Ido’s deceased, disabled daughter for whom he built the original body that he gives to Alita. This is a good move, as it makes explicit a relationship that takes much longer to develop in the manga (movies just don’t have the same time to develop plots as a manga series spanning decades.) Since Alita has no memory, she doesn’t remember her own name (Yoko). Doc therefore names her “Alita,” after the daughter whose body she now wears.

As an adopted child myself, I feel a certain kinship with narratives about adoption. Doc wants his daughter back. Alita wants to discover her true identity. Like any child, she is growing up, discovering love, and wants different things for her life than her father does.

Despite her amnesia, Alita has certain instincts. When faced with danger, she responds–without knowing how or why–with a sudden explosion of violence, decapitating a cyborg that has been murdering young women in her neighborhood. Alita can fight; she is extremely skilled in an advanced martial art developed for cyborgs. In short, she is a Martian battle droid that has temporarily mistaken itself for a teenage girl.

She begs Ido to hook her up to a stronger body (the one intended for his daughter was not built with combat in mind), but he refuses, declaring that she has a chance to start over, to become something totally new. She has free will. She can become anything–so why become a battle robot all over again?

But Alita cannot just remain Doc’s little girl. Like all children, she grows–and like most adopted children, she wants to know who she is and where she comes from. She is good at fighting. This is her only connection to her past, and as she asserts, she has a right to that. Doc Ido has no right to dictate her future.

What is Alita? As far as she knows, she is trash, broken refuse literally thrown out through the Tipharean rubbish chute. The worry that you were adopted because you were unwanted by your biological parents–thrown away–plagues many adopted children. But as Alita discovers, this isn’t true. She’s not trash–she’s an alien warrior who once attacked Earth and ended up unconscious in the scrap yard after losing most of her body in the battle. Like the Nephilim, she is a heavenly battle angel who literally fell to Earth.

By day, Ido is a doctor, healing people and fixing cyborgs. By night, he is a Hunter Warrior, killing people. For Ido, killing is an expression of rage after his daughter’s death, a way of channeling a psychotic impulse into something that benefits society by aiming it at people even worse than himself. But for Alita, violence serves a greater purpose–she uses her talent to eliminate evil and serve justice. Alita’s will is to protect the people she loves.

After Alita runs away, gets in a fight, descends into Hell, and is nearly completely destroyed, Doc relents and attaches her to a more powerful, warrior body. He recognizes that time doesn’t freeze and he cannot keep Alita forever as his daughter (a theme revisited later in the manga when Nova tries to trap Alita in an alternative-universe simulation where she never becomes a Hunter Warrior).

In an impassioned speech, Nova declares, “I spit upon the second law of thermodynamics!” He wants to freeze time and prevent decay. But even Nova, as we have seen, cannot contain Alita’s will. She knows it is a simulation. She plays along for a bit, enjoying the story, then breaks out.

Alita’s new body uses “nanotechnology,” which is to say, magic, to keep her going. Indeed, the technology in the movie is no more explained than magic in Harry Potter, other than some technobabble about how Alita’s heart contains a miniature nuclear reactor that could power the whole city, which is how she was able to stay alive for 300 years in a trash heap.

With her more powerful body, Alita is finally able to realize herself.

Alita’s maturation from infant (a living head, completely unable to move) to young adult is less explicit in the movie than in the manga, but it is still there–with the reconfiguration of her new body based on Alita’s internal self-image, Doc discovers that “She is a bit older than you thought she was.” In a dream sequence in the original, the metaphors are made explicit–limbless Alita in one scene becomes an infant strapped to Doc’s back as he roots through the dump for parts. Then she receives a pair of arms, and finally legs, turning into a toddler and then a girl. Finally, with her berserker body, she achieves adulthood.

But with all of this religious imagery, is Tiphares really heaven? Of course not–if it were, why would Nova–who is the true villain trying to kill her–live there? There was a war in the Heavens–but the Heavens are far beyond Tiphares. Alita will escape Purgatory and ascend to Tiphares–and unlike the others, she will not do it by being chopped into body parts for Nova’s experiments.

For the mind is its own place, and can make a Heaven of Hell, and a Hell of Heaven.

Tiphares is only the beginning, just as the Scrap Yard is not the Hell we take it for.


Harry Potter and the Coefficient of Kinship

 

[Image: Coefficient of kinship]

The main character of the first 4 chapters of Harry Potter isn’t Harry: it’s the Dursleys:

Mr and Mrs Dursley, of number four, Privet Drive, were proud to say that they were perfectly normal, thank you very much. They were the last people you’d expect to be involved in anything strange or mysterious, because they just didn’t hold with such nonsense.

The Dursleys are awful and abusive in an over-the-top, Roald Dahl way that somehow manages not to cause Harry any serious emotional problems, which even I, a hard-core hereditarian, would find improbable if Harry were a real boy. But Harry isn’t the point: watching the Dursleys get their comeuppance is the point.

JRR Tolkien and JK Rowling both focused on the same group of people–common English peasants–but Tolkien’s depiction of the Hobbits is much more sympathetic than Rowling’s depiction of the Muggles, even if they don’t like adventures:

This hobbit was a very well-to-do hobbit, and his name was Baggins. The Bagginses had lived in the neighborhood of The Hill for time out of mind and people considered them very respectable, not only because most of them were rich, but also because they never had any adventures or did anything unexpected: you could tell what a Baggins would say on any question without the bother of asking him.

We could wax philosophical (or political) about why Tolkien sees common folk as essentially good, despite their provinciality, and why Rowling sees them as essentially bad, for precisely the same reasons, but in the end both writers are correct, for there is good and bad in all groups.

Why are the Dursleys effective villains? Why is their buffoonish abuse believable, and why do so many people identify with young Harry? Is he not the Dursleys’ kin–if not their son, then their nephew? Shouldn’t they look out for him?

One of the great ironies of life is that the people who are closest to us are also the most likely to abuse us. Despite fears of “stranger danger” (or perhaps because of it) children are most likely to be harmed by parents, step-parents, guardians, or other close relatives/friends of the family, not strangers lurking in alleys or internet chatrooms.

The WHO reports: 

…there were an estimated 57 000 deaths attributed to homicide among children under 15 years of age in 2000. Global estimates of child homicide suggest that infants and very young children are at greatest risk, with rates for the 0–4-year-old age group more than double those of 5–14-year-olds…

The risk of fatal abuse for children varies according to the income level of a country and region of the world. For children under 5 years of age living in high-income countries, the rate of homicide is 2.2 per 100 000 for boys and 1.8 per 100 000 for girls. In low- to middle-income countries the rates are 2–3 times higher – 6.1 per 100 000 for boys and 5.1 per 100 000 for girls. The highest homicide rates for children under 5 years of age are found in the WHO African Region – 17.9 per 100 000 for boys and 12.7 per 100 000 for girls.

(Aside: in every single region, baby boys were more likely to be murdered than baby girls–how’s that “male privilege” for you?)

Estimates of physical abuse of children derived from population-based surveys vary considerably. A 1995 survey in the United States asked parents how they disciplined their children (12). An estimated rate of physical abuse of 49 per 1000 children was obtained from this survey when the following behaviours were included: hitting the child with an object, other than on the buttocks; kicking the child; beating the child; and threatening the child with a knife or gun. …

In a cross-sectional survey of children in Egypt, 37% reported being beaten or tied up by their parents and 26% reported physical injuries such as fractures, loss of consciousness or permanent disability as a result of being beaten or tied up (17).
In a recent study in the Republic of Korea, parents were questioned about their behaviour towards their children. Two-thirds of the parents reported whipping their children and 45% confirmed that they had hit, kicked or beaten them (26).
A survey of households in Romania found that 4.6% of children reported suffering severe and frequent physical abuse, including being hit with an object, being burned or being deprived of food. Nearly half of Romanian parents admitted to beating their children “regularly” and 16% to beating their children with objects (34).
In Ethiopia, 21% of urban schoolchildren and 64% of rural schoolchildren reported bruises or swellings on their bodies resulting from parental punishment (14).

Ugh. The Dursleys are looking almost decent right now.

In most ways, the Dursleys do not fit the pattern characteristic of most abuse cases–severe abuse and neglect are concentrated among drug-addicted single mothers with more children than they can feed and an unstable rotation of unrelated men in and out of the household. The Dursleys’ case is far milder, but we may still ask: why would anyone mistreat their kin? Wouldn’t natural selection–selfish genes and all that–select against such behavior?

There are a number of facile explanations for the Dursleys’ behavior. The first, suggested obliquely by Rowling, is that Mrs. Dursley was jealous of her sister, Lily, Harry’s mother, for being more talented (and prettier) than she was. This is the old “they’re only bullying you because they’re jealous” canard, and it’s usually wrong. We may discard this explanation immediately, as it is simply too big a leap from “I was jealous of my sister” to “therefore I abused her orphaned child for 11 years.” Most of us endured some form of childhood hardship–including sibling rivalry–without turning into abusive assholes who lock little kids in cupboards.

The superior explanation is that there is something about Harry that they just can’t stand. He’s not like them. This is expressed in Harry’s appearance–the Dursleys are described as tall, fat, pink-skinned, and blue-eyed with straight, blond hair, while Harry is described as short, skinny, pale-skinned, and green-eyed with wavy, dark hair.

More importantly, Harry can do magic. The Dursleys can’t.

It’s never explained in the books why some people can do magic and not others, but the trait looks strongly like a genetic one–not much more complicated than blue eyes. Magic users normally give birth to magical children, and non-magic users (the term “muggle” is an ethnic slur and should be treated as such) normally have non-magical children. Occasionally magical children are born to regular families, just as two brown-eyed parents occasionally have a blue-eyed child because both parents carried a recessive blue-eye allele that they happened to pass on to their offspring, and occasionally magical parents have regular children, just as smart people sometimes have dumb offspring. On the whole, however, magical ability is stable enough across generations that there are whole magical families that have been around for hundreds of years and non-magical families that have done the same.
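The blue-eyes analogy is easy to make concrete. Below is a minimal sketch of a single-locus recessive model–my own toy illustration, not anything Rowling specifies–under which two carrier Muggles have a one-in-four chance of a magical child, while Squibs would need some other explanation (a new mutation, or a trait more polygenic than this):

```python
# A toy single-locus model of the blue-eyes analogy above: treat "magic" as a
# recessive allele m, so magical = mm and non-magical = MM or Mm. These labels
# are my own illustration; nothing in the books specifies the genetics.
from itertools import product

def offspring_distribution(parent1, parent2):
    """Probability of each child genotype, drawing one allele from each parent."""
    counts = {}
    for a, b in product(parent1, parent2):
        genotype = "".join(sorted(a + b))
        counts[genotype] = counts.get(genotype, 0) + 1
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def p_magical(parent1, parent2):
    """Chance the child is magical (mm) under this toy model."""
    return offspring_distribution(parent1, parent2).get("mm", 0.0)

print(p_magical("Mm", "Mm"))  # two carrier Muggles -> 0.25 (a Muggle-born wizard)
print(p_magical("mm", "mm"))  # two wizards -> 1.0 (so Squibs need another explanation)
print(p_magical("MM", "mm"))  # non-carrier Muggle x wizard -> 0.0, all children carriers
```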

Any other factor–environmental, magical–could have been figured out by now and used to turn kids like Neville into competent wizards, so we conclude that such a factor does not exist.

Magic is a tricky thing to map, metaphorically, onto everyday existence, because nothing like it really exists in our world. We can vaguely imagine that Elsa hiding her ice powers is kind of like a gay person hiding the fact that they are gay, but being gay doesn’t let you build palaces or create sentient snowmen. Likewise, the Dursleys’ anger at Harry being “one of them,” and their adamant insistence that magic and wizardry don’t exist even though they know very well that Mrs. Dursley’s sister could turn teacups into frogs, does resemble the habit of certain very conservative people of pretending that homosexuality doesn’t exist, or that if their children never hear that homosexuality exists, they’ll never become gay.

The other difficulty with this metaphor is that gay people, left to their own devices, don’t produce children.

But putting together these two factors, we arrive at the conclusion that wizards are a distinct, mostly endogamous ethnic group that the Dursleys react to as though they were flaming homosexuals.

How many generations of endogamy would it take to produce two genetically distinct populations from one? Not many–take, for example, the Irish Travellers:

Researchers led by the Royal College of Surgeons in Ireland (RCSI) and the University of Edinburgh analysed genetic information from 42 people who identified as Irish Travellers.

The team compared variations in their DNA code with that of 143 European Roma, 2,232 settled Irish, 2,039 British and 6,255 European or worldwide individuals. …

They found that Travellers are of Irish ancestral origin but have significant differences in their genetic make-up compared with the settled community.

These differences have arisen because of hundreds of years of isolation combined with a decreasing Traveller population, the researchers say. …

The team estimates the group began to separate from the settled population at least 360 years ago.

That’s a fair bit of separation for a mere 360 years or so–and certainly enough for your relatives to act rather funny about it if you decided to run off with Travellers and then your orphaned child turned up on their doorstep.

How old are the wizarding families? Ollivander’s Fine Wands has been in business since 382 BC, and Merlin, Agrippa, and Ptolemy are mentioned as ancient Wizards, so we can probably assume a good 2,000 years of split between the two groups, with perhaps a 10% in-migration of non-magical spouses.

Harry is, based on his parents, 50% magical and 50% non-magical, though of course both Lily and Petunia Dursley probably carry some Wizard DNA.

In The Blank Slate, Pinker has some interesting observations on the subject of sociobiology:

As the notoriety of Sociobiology grew in the ensuing years, Hamilton and Trivers, who had thought up many of the ideas, also became targets of picketers… Trivers had argued that sociobiology is, if anything, a force for political progress. It is rooted in the insight that organisms did not evolve to benefit their family, group, or species, because the individuals making up those groups have genetic conflicts of interest with one another and would be selected to defend those interests. This immediately subverts the comfortable belief that those in power rule for the good of all, and it throws a spotlight on hidden actors in the social world, such as females and the younger generation.

Further in the book, Pinker continues:

Tolstoy’s famous remark that happy families are all alike but every unhappy family is unhappy in its own way is not true at the level of ultimate (evolutionary) causation. Trivers showed how the seeds of unhappiness in every family have the same underlying source. Though relatives have common interests because of their common genes, the degree of overlap is not identical within all their permutations and combinations of family members. Parents are related to all of their offspring by an equal factor, 50 percent, but each child is related to himself or herself by a factor of 100 percent. …

Parental investment is a limited resource. A day has only twenty-four hours … At one end of the lifespan, children learn that a mother cannot pump out an unlimited stream of milk; at the other, they learn that parents do not leave behind infinite inheritances.

To the extent that emotions among people reflect their typical genetic relatedness, Trivers argued, the members of a family should disagree on how parental investment should be divvied up.

And to the extent that one of the children in a household is actually a mixed-ethnicity nephew and no close kin at all to the father, the genetic relationship is even more distant between Harry and the Dursleys than between most children and the people raising them.

Parents should want to split their investment equitably among the children… But each child should want the parent to dole out twice as much of the investment to himself or herself as to a sibling, because children share half their genes with each full sibling but share all their genes with themselves. Given a family with two children and one pie, each child should want to split it in a ratio of two thirds to one third, while parents should want it to be split fifty fifty.

A person normally shares about 50% of their genes with their child and 25% of their genes with a niece or nephew, but we also share a certain amount of genes just by being distantly related to each other in the same species, race, or ethnic group.

Harry is, then, somewhat less genetically similar than the average nephew, so we can expect Mrs. Dursley to split any pies a bit less than 2/3s for Dudley and 1/3 for Harry, with Mr. Dursley grumbling that Harry doesn’t deserve any pie at all because he’s not their kid. (In a more extreme environment, if the Dursleys didn’t have enough pie to go around, it would be in their interest to give all of the pie to Dudley, but the Dursleys have plenty of food and they can afford to grudgingly keep Harry alive.)
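To make the pie arithmetic explicit, here is a minimal sketch of the split each party should prefer, weighting every child by its coefficient of relatedness to the chooser. The coefficients are the standard textbook ones; treating Harry as a plain nephew is a simplification the post itself argues against:

```python
# A minimal sketch of the "pie-splitting" logic from the Trivers/Pinker passage:
# each party weights every child by its coefficient of relatedness to itself and
# prefers the pie split in proportion to those weights. Treating Harry as an
# ordinary nephew (r = 1/4 to Petunia, 1/8 to Dudley, 0 to Vernon) is my
# simplification; the post argues his effective relatedness is a bit lower still.

def preferred_split(weights):
    """Normalize relatedness weights into each recipient's preferred pie share."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# An ordinary two-child family, as in Pinker's example:
print(preferred_split({"self": 1.0, "sibling": 0.5}))      # child's view: 2/3 vs 1/3
print(preferred_split({"child_a": 0.5, "child_b": 0.5}))   # parent's view: 50/50

# The Dursley household:
print(preferred_split({"Dudley": 0.5, "Harry": 0.25}))     # Petunia: 2/3 Dudley, 1/3 Harry
print(preferred_split({"Dudley": 1.0, "Harry": 0.125}))    # Dudley: ~89% for himself
print(preferred_split({"Dudley": 0.5, "Harry": 0.0}))      # Vernon: no pie for Harry
```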

Let’s check in with E. O. Wilson’s Sociobiology:

Most kinds of social behavior, including perhaps all of the most complex forms, are based in one way or another on kinship. As a rule, the closer the genetic relationship of the members of a group, the more stable and intricate the social bonds of its members. …

Parent-offspring conflict and its obverse, sibling-sibling conflict, can be seen throughout the animal kingdom. Littermates or nestmates fight among themselves sometimes lethally, and fight with their mothers over access to milk, food, and care…. The conflict also plays out in the physiology of prenatal human development. Fetuses tap their mothers’ bloodstreams to mine the most nutrients possible from their body, while the mother’s body resists to keep it in good shape for future children. …

Trivers touted the liberatory nature of sociobiology by invoking an “underlying symmetry in our social relationships” and “submerged actors in the social world.” He was referring to women, as we will see in the chapter on gender, and to children. The theory of parent-offspring conflict says that families do not contain all-powerful, all-knowing parents and their passive, grateful children. …

Sometimes families contain Dursleys and Potters.

Most profoundly, children do not allow their personalities to be shaped by their parents’ nagging, blandishments, or attempts to serve as role models.

Quite lucky for Harry!

Quoting Trivers:

The offspring cannot rely on its parents for disinterested guidance. One expects the offspring to be preprogrammed to resist some parental manipulation while being open to other forms. When the parent imposes an arbitrary system of reinforcement (punishment and reward) in order to manipulate the offspring to act against its own best interests, selection will favor offspring that resist such schedules of reinforcement.

(Are mixed-race kids more likely to be abused than single-race kids? Well, they’re more likely to be abused than White, Asian, or Hispanic kids, but less likely to be abused than Black or Native American children [Native American children have the highest rates of abuse]. It seems likely that the important factor here isn’t degree of relatedness, but how many of your parents hail from a group with high rates of child abuse. The Dursleys are not from a group with high child abuse rates.)

Let us return to E. O. Wilson’s Sociobiology:

Mammalogists have commonly dealt with conflict as if it were a nonadaptive consequence of the rupture of the parent-offspring bond. Or, in the case of macaques, it has been interpreted as a mechanism by which the female forces the offspring into independence, a step designed ultimately to benefit both generations. …

A wholly different approach to the subject has been taken by Trivers (1974). … Trivers interprets it as the outcome of natural selection operating in opposite directions on the two generations. How is it possible for a mother and her child to be in conflict and both remain adaptive? We must remember that the two share only one half their genes by common descent. There comes a time when it is more profitable for the mother to send the older juvenile on its way and to devote her efforts exclusively to the production of a new one. To the extent that the first offspring stands a chance to achieve an independent life, the mother is likely to increase (and at most double) her genetic representation in the next breeding generation by such an act. But the youngster cannot be expected to view the matter in this way at all. …

If the mother’s inclusive fitness suffers first from the relationship, conflict will ensue.

At some point, of course, the child is grown and therefore no longer benefits from the mother’s care; at this point the child and mother are no longer in conflict, but the roles may reverse as the parents become the ones in need of care.

As for humans:

Consider the offspring that behaves altruistically toward a full sibling. If it were the only active agent, its behavior would be selected when the benefit to the sibling exceeds two times the cost to itself. From the mother’s point of view, however, inclusive fitness is gained whenever the benefit to the sibling simply exceeds the cost to the altruist. Consequently, there is likely to evolve a conflict between parents and offspring in the attitudes toward siblings: the parent will encourage more altruism than the youngster is prepared to give. The converse argument also holds: the parent will tolerate less selfishness and spite among siblings than they have a tendency to display…

Indeed, Dudley is, in his way, crueler (more likely to punch Harry) and more greedy than even his parents.

Altruistic acts toward a first cousin are ordinarily selected if the benefit to the cousin exceeds 8 times the cost to the altruist, since the coefficient of relationship of first cousins is 1/8. However, the parent is related to its nieces and nephews by r=1/4, and it should prefer to see altruistic acts by its children toward their cousins whenever the benefit-to-cost ratio exceeds 2. Parental conscientiousness will also extend to interactions with unrelated individuals. From a child’s point of view, an act of selfishness or spite can provide a gain so long as its own inclusive fitness is enhanced… In human terms, the asymmetries in relationship and the differences in responses they imply will lead in evolution to an array of conflicts between parents and their children. In general, offspring will try to push their own socialization in a more egoistic fashion, while the parents will repeatedly attempt to discipline the children back to a higher level of altruism. There is a limit to the amount of altruism [healthy, normal] parents want to see; the difference is in the levels that selection causes the two generations to view as optimum.
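Wilson’s thresholds fall straight out of Hamilton’s rule: an act is favored, from some viewpoint, when the relatedness-weighted benefit exceeds the relatedness-weighted cost. Here is a minimal sketch of that bookkeeping with the standard coefficients (my own illustration, not a calculation from the book):

```python
# Hamilton's-rule bookkeeping behind the thresholds quoted above: an act is
# favored from some viewpoint when r_to_recipient * benefit > r_to_actor * cost,
# so the break-even benefit/cost ratio is r_to_actor / r_to_recipient.
# Coefficients of relatedness are the standard ones; this is an illustration.

def breakeven_ratio(r_to_actor, r_to_recipient):
    """Benefit/cost ratio above which the act raises the viewer's inclusive fitness."""
    return r_to_actor / r_to_recipient

# The actor's own view (r to itself = 1):
print(breakeven_ratio(1.0, 0.5))    # altruism toward a full sibling pays when b/c > 2
print(breakeven_ratio(1.0, 0.125))  # toward a first cousin, only when b/c > 8

# The parent's view of the same acts (r = 1/2 to its own child, 1/4 to a niece/nephew):
print(breakeven_ratio(0.5, 0.5))    # child helping a sibling: worthwhile when b/c > 1
print(breakeven_ratio(0.5, 0.25))   # child helping a cousin: worthwhile when b/c > 2
```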

To return to Pinker:

As if the bed weren’t crowded enough, every child of a man and a woman is also the grandchild of two other men and two other women. Parents take an interest in their children’s reproduction because in the long run it is their reproduction, too. Worse, the preciousness of female reproductive capacity makes it a valuable resource for the men who control her in traditional patriarchal societies, namely her father and brothers. They can trade a daughter or sister for additional wives or resources for themselves and thus they have an interest in protecting their investment by keeping her from becoming pregnant by men other than the ones they want to sell her to. It is not just the husband or boyfriend who takes a proprietary interest in a woman’s sexual activity, then, but also her father and brothers. Westerners were horrified by the treatment of women under the regime of the Taliban in Afghanistan from 1995 to 2001…

[Ah, what an optimistic time Pinker wrote in.]

Like many children, Harry is rescued from a bad family situation by that most modern institution, the boarding school.

The weakening of parents’ hold over their older children is also not just a recent casualty of destructive forces. It is part of a long-running expansion of freedom in the West that has granted children their always-present desire for more autonomy than parents are willing to cede. In traditional societies, children were shackled to the family’s land, betrothed in arranged marriages, and under the thumb of the family patriarch. That began to change in Medieval Europe, and some historians argue it was the first steppingstone in the expansion of rights that we associate with the Enlightenment and that culminated in the abolition of feudalism and slavery. Today it is no doubt true that some children are led astray by a bad crowd or popular culture. But some children are rescued from abusive or manipulative families by peers, neighbors, and teachers. Many children have profited from laws, such as compulsory schooling and the ban on forced marriages, that may override the preferences of their parents.

The sad truth, for Harry–and many others–is that their interests and their relatives’ interests are not always the same. Sometimes humans are greedy, self-centered, or just plain evil. Small children are completely dependent on their parents and other adults, unable to fend for themselves–so the death of Harry’s parents, followed by abuse and neglect from his aunt and uncle, constitutes a true betrayal.

But there is hope, even for an abused kid like Harry, because we live in a society that is much larger than families or tribal groups. We live in a place where honor killings aren’t common and even kids who aren’t useful to their families can find a way to be useful in the greater society. We live in a civilization.

Book Club: The 10,000 Year Explosion pt 7: Finale

 

[Image: Niels Bohr: 50% Jewish, 100% Quantum Physics]

Chapter 7 of The 10,000 Year Explosion is about the evolution of high Ashkenazi IQ; chapter 8 is the Conclusion, which is just a quick summary of the whole book. (If you’re wondering if you would enjoy the book, try reading the conclusion and see if you want to know more.)

This has been an enjoyable book. As works on human evolution go, it’s light–not too long and no complicated math. Pinker’s The Blank Slate gets into much more philosophy and ethics. But it also covers a lot of interesting ground, especially if you’re new to the subject.

I have seen at least 2 people mention recently that they had plans to respond to/address Cochran and Harpending’s timeline of Jewish history/evolution in chapter 7. I don’t know enough to question the story, so I hope you’ll jump in with anything enlightening.

The basic thesis of Chapter 7 is that the massive Ashkenazi over-representation in science, math, billionaires, and ideas generally is due to their massive brains, which are in turn due to selective pressure over the past thousand years or so in Germany and nearby countries to be good at jobs that require intellect. The authors quote the historian B. D. Weinryb:

More children survived to adulthood in affluent families than in less affluent ones. A number of genealogies of business leaders, prominent rabbis, community leaders, and the like–generally belonging to the more affluent classes–show that such people often had four, six, sometimes even eight or nine children who reached adulthood…

[Image: Einstein and Bohr, 1925]

Weinryb cites a census of the town of Brody, 1764: homeowner households had 1.2 children per adult; tenant households had only 0.6.

As evidence for this recent evolution, the authors point to the many genetic diseases that disproportionately affect Ashkenazim:

Tay-Sachs disease, Gaucher’s disease, familial dysautonomia, and two different forms of hereditary breast cancer (BRCA1 and BRCA2), and these diseases are up to 100 times more common in Ashkenazi Jews than in other European populations. …

In principle, absent some special cause, genetic diseases like these should be rare. New mutations, some of which have bad effects, appear in every generation, but those that cause death or reduced fertility should be disappearing with every generation. … one in every twenty-five Ashkenazi Jews carries a copy of the Tay-Sachs mutation, which kills homozygotes in early childhood. This is an alarming rate.

What’s so special about these diseases, and why do the Ashkenazim have so darn many of them?

Some of them look like IQ boosters, considering their effects on the development of the central nervous system. The sphingolipid mutations, in particular, have effects that could plausibly boost intelligence. In each, there is a buildup of some particular sphingolipid, a class of modified fat molecules that play a role in signal transmission and are especially common in neural tissues. Researchers have determined that elevated levels of those sphingolipids cause the growth of more connections among neurons…

There is a similar effect in Tay-Sachs disease: increased levels of a characteristic storage compound… which causes a marked increase in the growth of dendrites, the fine branches that connect neurons. …

We looked at the occupations of patients in Israel with Gaucher’s disease… These patients are much more likely to be engineers or scientists than the average Israeli Ashkenazi Jew–about eleven times more likely, in fact.

[Image: Einstein and Oppenheimer, Father of the Atomic Bomb, c. 1950]

Basically, the idea is that, similar to sickle cell anemia, being heterozygous for one of these mutations may make you smarter–and being homozygous might make your life considerably shorter. In an environment where being a heterozygous carrier is rewarded strongly enough, the diseases will propagate–even if they incur a significant cost.

It’s a persuasive argument.
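As a back-of-the-envelope check on that logic, here is a minimal sketch of the classic heterozygote-advantage equilibrium, with my own toy numbers rather than anything from the book; a carrier advantage of only a couple of percent is enough to hold a childhood-lethal allele at the observed Tay-Sachs carrier frequency:

```python
# Classic heterozygote advantage: with fitnesses w(+/+) = 1 - s, w(+/T) = 1,
# w(T/T) = 1 - t, the deleterious allele settles at q* = s / (s + t).
# The Tay-Sachs numbers below are my own rough illustration, not the authors'.

def equilibrium_freq(het_advantage, homozygote_cost):
    """Equilibrium allele frequency under overdominant (balancing) selection."""
    return het_advantage / (het_advantage + homozygote_cost)

def required_het_advantage(target_freq, homozygote_cost=1.0):
    """Heterozygote advantage needed to hold the allele at target_freq."""
    return target_freq * homozygote_cost / (1.0 - target_freq)

# Roughly 1 carrier in 25 implies an allele frequency near 1/50 = 0.02, and
# Tay-Sachs homozygotes die in childhood (cost ~ 1).
q = 0.02
print(required_het_advantage(q))    # ~0.02: a ~2% carrier advantage would suffice
print(equilibrium_freq(0.02, 1.0))  # ~0.0196: round-trip check on the formula
```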

I’d like to go on a quick tangent to Von Neumann’s Wikipedia page:

Von Neumann was a child prodigy. When he was 6 years old, he could divide two 8-digit numbers in his head [14][15] and could converse in Ancient Greek. When the 6-year-old von Neumann caught his mother staring aimlessly, he asked her, “What are you calculating?”[16]

Children did not begin formal schooling in Hungary until they were ten years of age; governesses taught von Neumann, his brothers and his cousins. Max believed that knowledge of languages in addition to Hungarian was essential, so the children were tutored in English, French, German and Italian.[17] By the age of 8, von Neumann was familiar with differential and integral calculus,[18] but he was particularly interested in history. He read his way through Wilhelm Oncken‘s 46-volume Allgemeine Geschichte in Einzeldarstellungen.[19] A copy was contained in a private library Max purchased. One of the rooms in the apartment was converted into a library and reading room, with bookshelves from ceiling to floor.[20]

[Image: Von Neumann]

Von Neumann entered the Lutheran Fasori Evangélikus Gimnázium in 1911. Wigner was a year ahead of von Neumann at the Lutheran School and soon became his friend.[21] This was one of the best schools in Budapest and was part of a brilliant education system designed for the elite. Under the Hungarian system, children received all their education at the one gymnasium. Despite being run by the Lutheran Church, the school was predominantly Jewish in its student body.[22] The school system produced a generation noted for intellectual achievement, which included Theodore von Kármán (b. 1881), George de Hevesy (b. 1885), Leó Szilárd (b. 1898), Dennis Gabor (b. 1900), Eugene Wigner (b. 1902), Edward Teller (b. 1908), and Paul Erdős (b. 1913).[23] Collectively, they were sometimes known as “The Martians”.[24]

One final thing in The 10,000 Year Explosion jumped out at me:

There are also reports of individuals with higher-than-average intelligence who have nonclassic congenital adrenal hyperplasia (CAH)… CAH, which causes increased exposure of the developing fetus to androgens (male sex hormones), is relatively mild compared to diseases like Tay-Sachs. At least seven studies show high IQ in CAH patients, parents, and siblings, ranging from 107 to 113. The gene frequency of CAH among the Ashkenazim is almost 20 percent.

Holy HBD, Batman, that’ll give you a feminist movement.

If you haven’t been keeping obsessive track of who’s who in the feminist movement, many of the early pioneers were Jewish women, as discussed in a recent article by the Jewish Telegraphic Agency, “A History of the Radical Jewish Feminists and the One Subject they Never Talked About”:

Heather Booth, Amy Kesselman, Vivian Rothstein and Naomi Weisstein. The names of these bold and influential radical feminists may have faded in recent years, but they remain icons to students of the women’s liberation movement …

The Gang of Four, as they dubbed themselves, were among the founders of Chicago’s Women’s Liberation Union. …

Over weeks, months and years, no subject went unturned, from the political to the sexual to the personal. They were “ready to turn the world upside down,” recalled Weisstein, an influential psychologist, neuroscientist and academic who died in 2015.

But one subject never came up: the Jewish backgrounds of the majority of the group.

“We never talked about it,” Weisstein said.

Betty Friedan was Jewish; Gloria Steinem is half Jewish. There are a lot of Jewish feminists.

Of course, Jews are over-represented in pretty much every intellectual circle. Ayn Rand, Karl Marx, and Noam Chomsky are all Jewish. Einstein and Freud were Jewish. I haven’t seen anything suggesting that Jews are more over-represented in feminism than in any other intellectual circle they’re over-represented in. Perhaps they just like ideas. Someone should come up with some numbers.

Here’s a page on Congenital Adrenal Hyperplasia. The “classic” variety is often deadly, but the non-classic (the sort we are discussing here) doesn’t kill you.

[Image: Paul Erdős with Terence Tao, 1984 (Tao isn’t Jewish, of course.)]

I’ve long suspected that I know so many trans people because some intersex conditions result in smarter brains (in this case, women who are better than average at math). It looks like I may be on the right track.

Well, that’s the end of the book. I hope you enjoyed it. What did you think? And what should we read next? (I’m thinking of doing Pinker’s Blank Slate.)

Book Club: The 10,000 Year Explosion: pt 4 Agriculture

Welcome back to EvX’s Book Club. Today we’re discussing Chapter 4 of The 10,000 Year Explosion: Consequences of Agriculture.

A big one, of course, was plague–on a related note, Evidence for the Plague in Neolithic Farmers’ Teeth:

When they compared the DNA of the strain recovered from this cemetery to all published Y. pestis genomes, they found that it was the oldest (most basal) strain of the bacterium ever recovered. Using the molecular clock, they were able to estimate a timeline for the divergence and radiation of Y. pestis strains and tie these events together to make a new, testable model for the emergence and spread of this deadly human pathogen.

These analyses indicate that plague was not first spread across Europe by the massive migrations by the Yamnaya peoples from the central Eurasian steppe (around 4800 years ago)… Rascovan et al. calculated the date of the divergence of Y. pestis strains at between 6,000 and 5,000 years ago. This date implicates the mega-settlements of the Trypillia Culture as a possible origin point of Y. pestis. These mega-settlements, home to an estimated 10,000-20,000 people, were dense concentrations of people during that time period in Europe, with conditions ideal for the development of a pandemic.

The Cucuteni-Trypillia Culture flourished between the Carpathian Mountains and the Black Sea from 4800-3000 BC. It was a neolithic–that is, stone age–farming society with many large cities. Wikipedia gives a confused account of its demise:

According to some proponents of the Kurgan hypothesis of the origin of Proto-Indo-Europeans … the Cucuteni–Trypillia culture was destroyed by force. Arguing from archaeological and linguistic evidence, Gimbutas concluded that the people of the Kurgan culture (a term grouping the Yamnaya culture and its predecessors) … effectively destroyed the Cucuteni–Trypillia culture in a series of invasions undertaken during their expansion to the west. Based on this archaeological evidence Gimbutas saw distinct cultural differences between the patriarchal, warlike Kurgan culture and the more peaceful egalitarian Cucuteni–Trypillia culture, … which finally met extinction in a process visible in the progressing appearance of fortified settlements, hillforts and the graves of warrior-chieftains, as well as in the religious transformation from the matriarchy to patriarchy, in a correlated east–west movement.[26] In this, “the process of Indo-Europeanization was a cultural, not a physical, transformation and must be understood as a military victory in terms of successfully imposing a new administrative system, language, and religion upon the indigenous groups.[27]

How does it follow that the process was a cultural, not physical transformation? They got conquered.

In his 1989 book In Search of the Indo-Europeans, Irish-American archaeologist J. P. Mallory, summarising the three existing theories concerning the end of the Cucuteni–Trypillia culture, mentions that archaeological findings in the region indicate Kurgan (i.e. Yamnaya culture) settlements in the eastern part of the Cucuteni–Trypillia area, co-existing for some time with those of the Cucuteni–Trypillia.[4] Artifacts from both cultures found within each of their respective archaeological settlement sites attest to an open trade in goods for a period,[4] though he points out that the archaeological evidence clearly points to what he termed “a dark age,” its population seeking refuge in every direction except east. He cites evidence of the refugees having used caves, islands and hilltops (abandoning in the process 600–700 settlements) to argue for the possibility of a gradual transformation rather than an armed onslaught bringing about cultural extinction.[4]

How is “refugees hiding in caves” a “gradual transformation?” That sounds more like “people fleeing an invading army.”

The obvious issue with that theory is the limited common historical life-time between the Cucuteni–Trypillia (4800–3000 BC) and the Yamnaya culture (3300–2600 BC); given that the earliest archaeological findings of the Yamnaya culture are located in the Volga–Don basin, not in the Dniester and Dnieper area where the cultures came in touch, while the Yamnaya culture came to its full extension in the Pontic steppe at the earliest around 3000 BC, the time the Cucuteni–Trypillia culture ended, thus indicating an extremely short survival after coming in contact with the Yamnaya culture.

How is that an issue? How long does Wikipedia think it takes to slaughter a city? It takes a few days. 300 years of contact is plenty for both trade and conquering.

Another contradicting indication is that the kurgans that replaced the traditional horizontal graves in the area now contain human remains of a fairly diversified skeletal type approximately ten centimetres taller on average than the previous population.[4]

What are we even contradicting? Sounds like they got conquered, slaughtered, and replaced.

Then Wikipedia suggests that maybe it was all just caused by the weather (which isn’t a terrible idea). Drought weakened the agriculturalists and prompted the pastoralists to look for new grasslands for their herds. They invaded the agriculturalists’ areas because they were lush and good for growing grain, which the pastoralists’ cattle love eating. The already weakened agriculturalists couldn’t fight back.

ANYWAY. Let’s get on with Greg and Henry’s account, The 10,000 Year Explosion:

The population expansion associated with farming increased crowding, while farming itself made people sedentary. Mountains of garbage and water supplies contaminated with human waste favored the spread of infectious disease. …

Most infectious diseases have a critical community size, a  number and concentration of people below which they cannot persist. The classic example is measles, which typically infects children and remains infectious for about ten days, after which the patient has lifelong immunity. In order for measles to survive, the virus that causes it, the paramyxovirus, must continually find unexposed victims–more children. Measles can only persist in a large, dense population: Populations that are too small or too spread out (under half a million in close proximity) fail to produce unexposed children fast enough, so the virus dies out.

Measles, bubonic plague, smallpox: all results of agriculture.

Chickenpox: not so much.

I wonder if people in the old Cucuteni–Trypillia area are particularly immune to bubonic plague, or if the successive waves of invading steppe nomads have done too much genetic replacement (slaughtering) for adaptations to stick around?

Harpending and Cochran then discuss malaria, which has had a big impact on human genomes (e.g., sickle cell) in the areas where malaria is common.

In general, the authors play it safe in the book–pointing to obvious cases of wide-scale genetic changes, like sickle cell, that are both indisputable and have no obvious effect on personality or intelligence. It’s only in the chapter on Ashkenazi IQ that they touch on more controversial subjects, and then in a positive manner–it’s pleasant to think, “Why was Einstein so smart?” and less pleasant to think, “Why am I so dumb?”

However:

It’s time to address the old chestnut that biological differences among human populations are “superficial,” only skin-deep. It’s not true: We’re seeing genetically caused differences in all kinds of functions, and every such difference was important enough to cause a significant increase in fitness (number of offspring)–otherwise it wouldn’t have reached high frequency in just a few millennia.

As for skin color, Cochran and Harpending lean on the side of high-latitude lightening having been caused by agriculture, rather than mere sunlight levels:

Interestingly, the sets of changes driving light skin color in China are almost entirely different from those performing a similar function in Europe. …

Many of these changes seem to be quite recent. The mutation that appears to have the greatest effect on skin color among Europeans and neighboring peoples, a variant of SLC24A5, has spread with astonishing speed. Linkage disequilibrium… suggests that it came into existence about 5,800 years ago, but it has a frequency of 99 percent throughout Europe and is found at significant levels in North Africa, East Africa, and as far east as India and Ceylon. If it is indeed that recent, it must have had a huge selective advantage, perhaps as high as 20 percent. It would have spread so rapidly that, over a long lifetime a farmer could have noticed the change in appearance in his village.

Wow.
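To see why “recent plus near-fixation” implies strong selection, here is a minimal sketch of how fast an advantageous allele sweeps, using a rough genic-selection model with an assumed starting frequency and 25-year generations (my own toy calculation, not the authors’):

```python
# A toy check on why "recent + near-fixation" implies strong selection.
# Genic (haploid-style) selection: p' = p(1+s) / (1 + p*s), iterated until the
# allele passes 99%. Starting frequency and 25-year generations are assumptions.

def generations_to_fixation(s, p0=5e-5, target=0.99):
    """Generations for an allele with per-generation advantage s to go from p0 to target."""
    p, gens = p0, 0
    while p < target:
        p = p * (1 + s) / (1 + p * s)
        gens += 1
    return gens

for s in (0.20, 0.05, 0.02):
    gens = generations_to_fixation(s)
    print(f"s = {s:.2f}: ~{gens} generations, ~{gens * 25} years")
# With s = 0.20 the allele sweeps in roughly 2,000 years; at s = 0.02 it would
# need nearly 20,000 years -- far too slow for an allele only ~5,800 years old
# to be sitting at 99% frequency.
```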

In humans, OCA2 … is a gene involved in the melanin pathway… Species of fish trapped in caves… lose their eyesight and become albinos over many generations. … Since we see changes in OCA2 in each [fish] case, however, there must have been some advantage in knocking out OCA2, at least in that underground environment. The advantage cannot lie in increased UV absorption, since there’s no sunlight in those caves.

There are hints that knocking out OCA2, or at least reducing its activity, may be advantageous… in humans who can get away with it. We see a pattern that suggests that having one inactive copy of OCA2 is somehow favored even in some quite sunny regions. In southern Africa, a knocked-out version of OCA2 is fairly common: The gene frequency is over 1 percent.

And that’s an area with strong selection for dark skin.

A form of OCA2 albinism is common among the Navajo and other neighboring tribes, with gene frequencies as high as 4.5 percent. The same pattern appears in southern Mexico, eastern Panama, and southern Brazil. All of which suggests that heterozygotes… may have some advantage.

Here is an article on the possibility of sexual selection for albinism among the Hopi.

So why do Europeans have such variety in eye and hair color?

Skeletons

The skeletal record clearly supports the idea that there has been rapid evolutionary change in humans over the past 10,000 years. The human skeleton has become more gracile–more lightly built–though more so in some populations than others. Our jaws have shrunk, our long bones have become lighter, and brow ridges have disappeared in most populations (with the notable exception of Australian Aborigines, who have also changed, but not as much; they still have brow ridges, and their skulls are about twice as thick as those of other peoples.)

This could be related to the high rates of interpersonal violence common in Australia until recently (thicker skulls are harder to break) or a result of interbreeding with Neanderthals and Denisovans. We don’t know what Denisovans looked like, but Neanderthals certainly are noted for their robust skulls.

Skull volume has decreased, apparently in all populations: In Europeans, volume is down about 10 percent from the high point about 20,000 years ago.

This seems like a bad thing. Except for mothers.

Some changes can be seen even over the past 1,000 years. English researchers recently compared skulls from people who died in the Black Death ([approximately] 650 years ago), from the crew of the Mary Rose, a ship that sank in Tudor times ([approximately] 450 years ago), and from our contemporaries. The shape of the skull changed noticeably over that brief period–which is particularly interesting because we know there has been no massive population replacement in England over the past 700 years.

Hasn’t there been a general replacement of the lower classes by the upper classes? I think there was also a massive out-migration of English to other continents in the past five hundred years.

The height of the cranial vault of our contemporaries was about 15 percent larger than that of the earlier populations, and the part of the skull containing the frontal lobes was thus larger.

This is awkwardly phrased–I think the authors want the present tense–“the cranial vault of our contemporaries is…” Nevertheless, it’s an interesting study. (The frontal lobes control things like planning, language, and math.) 

We then proceed to the rather depressing Malthus section and the similar “elites massively out-breeding commoners due to war or taxation” section. You’re probably familiar with Genghis Khan by now. 

We’ve said that the top dogs usually had higher-than-average fertility, which is true, but there have been important exceptions… The most common mistake must have been living in cities, which have almost always been population sinks, mostly because of infectious disease. 

They’re still population sinks. Just look at Singapore. Or Tokyo. Or London. 

The case of silphium, a natural contraceptive and abortifacient eaten to extinction during the Classical era, bears an interesting parallel to our own society’s falling fertility rates. 

And of course, states domesticate their people: 

Farmers don’t benefit from competition between their domesticated animals or plants… Since the elites were in a very real sense raising peasants, just as peasants raised cows, there must have been a tendency for them to cull individuals who were more aggressive than average, which over time would have changed the frequencies of those alleles that induced such aggressiveness.

On the one hand, this is a very logical argument. On the other hand, it seems like people can turn on or off aggression to a certain degree–uber peaceful Japan was rampaging through China only 75 years ago, after all. 

Have humans been domesticated? 

(Note: the Indians captured by the Puritans during the Pequot War may have refused to endure the yoke, but they did practice agriculture–they raised corn, squash and beans, in typical style. Still, they probably had not endured under organized states for as long as the Puritans.)

There is then a fascinating discussion of the origins of the scientific revolution–an event I am rather fond of. 

Although we do not as yet fully understand the true causes of the scientific and industrial revolution, we must now consider the possibility that continuing human evolution contributed to that process. It could explain some of the odd historical patterns that we see.

Well, that’s enough for today. Let’s continue with Chapter 5 next week.

How about you? What are your thoughts on the book?

Book Club: The 10,000 Year Explosion pt. 2: Behavioral Modernity 

 

Neanderthal skull

Welcome back to EvX’s book club. Today we’re discussing Cochran and Harpending’s The 10,000 Year Explosion: How Civilization Accelerated Human Evolution, chapter 2: The Neanderthal Within.

How did you like the chapter?

Unless I have missed a paper somewhere, this is a remarkable chapter, for The 10,000 Year Explosion was published in 2009, and the first Neanderthal genome showing more overlap with Europeans (and Asians) than Sub-Saharans was published in 2010. Greg and Henry did know of genetic evidence that humans have about 5% admixture from some archaic sister-species, but no one yet had evidence of which species, nor was there popular agreement on the subject. Many scientists still rejected the notion of Sapiens-Neanderthal interbreeding when Cochran and Harpending laid out their bold claim that not only had it happened, but it was a critical moment in human history, jump-starting the cultural effervescence known as behavioral modernity.

Homo sapiens have been around for 300,000 years–give or take a hundred thousand–but for most of that time, we left behind rather few interesting artifacts. As the authors point out, we failed to develop agriculture during the Eemian interglacial (though we managed to develop agriculture at least 7 times, independently, during the current interglacial). Homo sapiens attempted to leave Africa several times before 70,000 years ago, but failed each time, either because they weren’t clever enough to survive in their new environment or couldn’t compete with existing hominins (ie, Neanderthals) in the area.

Sapiens' technology didn't do much interesting for the first couple hundred thousand years, either. Yet 70,000 years ago, sapiens did manage to leave Africa, displace the Neanderthals, spread into radically new climates, develop long-distance trade and art, and eventually agriculture and everything we now enjoy here in the modern world.

According to Wikipedia, behavioral modernity includes:

Burial, fishing, art, self-decoration via jewelry or pigment, bone tools, sharp blades, hearths, multi-part tools, long-distance transportation of important items, and regionally distinct artifacts.

This leaves two important questions re: Cochran and Harpending’s theory. First, when exactly did behavioral modernity emerge, and second, was it a gradual transition or a sudden explosion?

Prehistoric art is tricky to date–and obviously did not always get preserved–but Blombos Cave, South Africa, currently contains our earliest piece, from about 70,000-100,000 years ago. The Blombos art is not figurative–it’s patterns of crosshatched lines–but there’s a fair amount of it. Blombos appears to have been an ochre-processing spot (the art is made with or on pieces of ochre) littered with thousands of leftover scraps. According to Wikipedia:

In 2008 an ochre processing workshop consisting of two toolkits was uncovered in the 100,000-year-old levels at Blombos Cave, South Africa.[3] Analysis shows that a liquefied pigment-rich mixture was produced and stored in the shells of two Haliotis midae (abalone), and that ochre, bone, charcoal, grindstones and hammer-stones also formed a composite part of the toolkits. As both toolkits were left in situ, and as there are few other archaeological remains in the same layer, it seems the site was used primarily as a workshop and was abandoned shortly after the pigment-rich compounds were made. Dune sand then blew into the cave from the outside, encapsulated the toolkits and by happenstance ensured their preservation before the next occupants arrived, possibly several decades or centuries later.

The application or use of the compound is not self-evident. No resins or wax were detected that might indicate it was an adhesive for hafting.

70 beads made from shells with holes drilled in them have also been found at Blombos.

Blombos is interesting, but the “art” is not actually very good–and we can’t say for sure that it was meant as art at all. Maybe the locals were just scraping the rocks to get the ochre off, for whatever purposes.

Indisputable art emerges a little later, around 40,000 years ago–simultaneously, it appears, in Europe, Asia, Australia, and Indonesia. The archaeology of Africa is less well-documented (in part because things just disintegrate quickly in some areas), but the earliest known sub-Saharan figurative art is about 26,000 years old. This art is both more advanced (it actually looks like art) and more abundant than its predecessors–the Sungir burial, dated to around 30,000-34,000 BC, for example, contains over 13,000 beads–a stark contrast to Blombos’s 70.

If a specific event triggered the simultaneous development of figurative art–and other aspects of behavioral modernity–in four different parts of the world, that event would logically have occurred before those groups split up. The timing of our interbreeding with Neanderthals–"In Eurasia, interbreeding between Neanderthals and Denisovans with modern humans took place several times between about 100,000 and 40,000 years ago, both before and after the recent out-of-Africa migration 70,000 years ago"–is therefore temporally perfect.

Subsequent back-migration could have then carried the relevant Neanderthal genomes into Africa–for regardless of where or how behavioral modernity started, all humans now have it.

So what do you think? Did we talk the Neanderthals to death? Did we get the gene for talking from the Neanderthals? Did we out-think them? Or did we just carry some disease or parasite that wiped them out? Or did they wipe themselves out via maternal death in labor, due to their enormous skulls?

(As for FOXP2, it appears that the version found in humans and Neanderthals is slightly different, so I find it a little doubtful that we got it from them.)

A couple of interesting quotes:

In several places, most clearly in central and southwestern France and part of northern Spain, we find a tool tradition that lasted from about 35,000 to 28,000 years ago (the Chatelperronian) that appears to combine some of the techniques of the Neanderthals … with those of modern humans. … Most important, there are several skeletons clearly associated with the Chatelperronian industry, and all are Neanderthal. This strongly suggests that there were interactions between the populations, enough that the Neanderthals learned some useful techniques from modern humans.

The smoking gene?

P. D. Evans and his colleagues at the University of Chicago looked at microcephalin (MCPH1), a very unusual gene that regulates brain size. They found that most people today carry a version that is quite uniform, suggesting that it originated recently. At the same time, it is very different from other, more varied versions found  at the same locus in humans today, all of which have many single-nucleotide differences among them. More than that, when there are several different versions of a gene at some locus, we normally find some intermediate versions created by recombination, that is, by chromosomes occasionally breaking and recombining. In the case of the unusual gene (called D for “derived”) at the microcephalin locus, such recombinants are very rare: It is as if the common, highly uniform version of microcephalin simply hasn’t been in the human race all that long in spite of the high frequency of the new version in many human populations. The researchers estimated that it appeared about 37,000 years ago (plus or minus a few tens of thousands of years.) And if it did show up then, Neanderthals are a reasonable, indeed likely, source.

So far as I know (and I looked it up a few weeks ago) no one has yet found microcephalin D in Neanderthals–and the date of 37,000 years ago sounds a bit too recent. However, we haven’t actually genotyped that many Neanderthals (it’s hard to find good 40,000 year old DNA), so we might just not have found it yet–and the date might simply be wrong.

It’s a remarkable genetic finding, even if it didn’t involve Neanderthals–and it might be simpler to dispense with other standards and define Homo sapiens as starting at this point.

On a related note, here’s a bit from Wikipedia about the ASPM gene:

A new allele (version) of ASPM appeared sometime between 14,100 and 500 years ago with a mean estimate of 5,800 years ago. The new allele has a frequency of about 50% in populations of the Middle East and Europe, it is less frequent in East Asia, and has low frequencies among Sub-Saharan African populations.[12] It is also found with an unusually high percentage among the people of Papua New Guinea, with a 59.4% occurrence.[13]

The mean estimated age of the ASPM allele of 5,800 years ago, roughly correlates with the development of written language, spread of agriculture and development of cities.[14] Currently, two alleles of this gene exist: the older (pre-5,800 years ago) and the newer (post-5,800 years ago). About 10% of humans have two copies of the new ASPM allele, while about 50% have two copies of the old allele. The other 40% of humans have one copy of each. Of those with an instance of the new allele, 50% of them are an identical copy.[15] The allele affects genotype over a large (62 kbp) region, a so called selective sweep which signals a rapid spread of a mutation (such as the new ASPM) through the population; this indicates that the mutation is somehow advantageous to the individual.[13][16]

Testing the IQ of those with and without new ASPM allele has shown no difference in average IQ, providing no evidence to support the notion that the gene increases intelligence.[16][17][18] However statistical analysis has shown that the older forms of the gene are found more heavily in populations that speak tonal languages like Chinese or many Sub-Saharan African languages.[19]

We still have so much to discover.

Book Club: The 10,000 Year Explosion pt 1

 

For most of the last century, the received wisdom in the social sciences has been that human evolution stopped a long time ago–in the most up-to-date version, before modern humans expanded out of Africa some 50,000 years ago. This implies that human minds must be the same everywhere–the "psychic unity of mankind." It would certainly make life simpler if it were true.

Thus Cochran and Harpending fire the opening salvo of The 10,000 Year Explosion: How Civilization Accelerated Human Evolution. (If you haven't finished the book yet, don't worry–we'll discuss one chapter a week, so you have plenty of time.)

The book’s main thesis–as you can guess by reading the title–is that human evolution did not halt back in the stone age, but has accelerated since then.

I’ve been reading Greg and Henry’s blog for years (now Greg’s blog, since Henry sadly passed away.) If you’re a fan of the blog, you’ll like the book, but if you follow all of the latest human genetics religiously, you might find the book a bit redundant. Still, it is nice to have many threads tied together in one place–and in Greg & Henry’s entertaining style. (I am about halfway through the book as of this post, and so far, it has held up extremely well over the years since it was published.)

Chapter One: Conventional Wisdom explains some of the background science and history necessary to understand the book. Don’t worry, it’s not complicated (though it probably helps if you’ve seen this before.)

A lot of our work could be called "genetic history." … This means that when a state hires foreign mercenaries, we are interested in their numbers, their geographic origin, and the extent to which they settled down and mixed with the local population. We don't much care whether they won their battles, as long as they survived and bred. …

For an anthropologist it might be important to look at how farmers in a certain region and time period lived; for us, as genetic historians, the interesting thing is how natural selection allowed agriculture to come about to begin with, and how the pressures of an agricultural lifestyle allowed changes in the population’s genetic makeup to take root and spread.

One of the things I find fascinating about humans is that the agricultural revolution happened more or less independently in 11 different places, all around 10,000 years ago. There's a little variation due to local conditions and we can't be positive that the Indus Valley didn't have some influence on Mesopotamia and vice versa, but this is a remarkable convergence. Homo sapiens are estimated to have been around for about 200–300,000 years (and we were preceded by a couple million years of other human ancestor-species like Homo erectus), but for the first 280,000 years or so of our existence no one bothered to invent agriculture. Then in the span of a few thousand years, suddenly it popped up all over the darn place, even in peoples like the Native Americans who were completely isolated from developments over in Asia and Africa.

This suggests to me that some process was going on simultaneously in all of these human populations–a process that probably began back when these groups were united and then progressed at about the same speed, culminating in the adoption of agriculture.

One possibility is simply that humans were hunting the local large game, and about 10,000 years ago, they started running out. An unfortunate climatic event could have pushed people over the edge, reducing them from eating large, meaty animals to scrounging for grass and tubers.

Another possibility is that human migrations–particularly the Out of Africa Event, but even internal African migrations could be sufficient–caused people to become smarter as they encountered new environments, which allowed them to make the cognitive leap from merely gathering food to tending food.

A third possibility, which we will discuss in depth next week, is that interbreeding with Neanderthals and other archaic species introduced new cognitive features to humanity.

And a fourth, related possibility is that humans, for some reason, suddenly developed language and thus the ability to form larger, more complex societies with a division of labor, trade, communication, and eventually agriculture and civilization.

We don’t really know when language evolved, since the process left behind neither bones nor artifacts, but if it happened suddenly (rather than gradually) and within the past 300,000 years or so, I would mark this as the moment Homo sapiens evolved.

While many animals can understand a fair amount of language (dogs, for instance) and some can even speak (parrots,) the full linguistic range of even the most intelligent apes and parrots is still only comparable to a human toddler. The difference between human language abilities and all other animals is stark.

There is great physical variation in modern humans, from Pygmies to Danes, yet we can all talk–even deaf people who have never been taught sign language seek to communicate and invent their own sign language more complex and extensive than that of the most highly trained chimps. Yet if I encountered a group of “humans” that looked just like some of us but fundamentally could not talk, could not communicate or understand language any more than Kanzi the Bonobo, I could not count them members of my species. Language is fundamental.

But just because we can all speak, that does not mean we are all identical in other mental ways–as you well know if you have ever encountered someone who is inexplicably wrong about EVERYTHING here on the internet.

But back to the book:

We intend to make the case that human evolution has accelerated in the past 10,000 years, rather than slowing or stopping, and is now happening about 100 times faster than its long-term average over the 6 million years of our existence.

A tall order!

Some anthropologists refer to Bushmen as “gracile,” which means they are a little shorter than average Europeans and not stockily built

To summarize Cochran and Harpending’s argument: Evolution is static when a species has already achieved a locally-optimal fit with its environment, and the environment is fairly static.

Human environments, however, have not been static for the past 70,000 years or so–they have changed radically. Humans moved from the equator to the polar circle, scattered across deserts and Polynesian islands, adapting to changes in light, temperature, disease, and food along the way.

The authors make a fascinating observation about hunting strategies and body types:

…when humans hunted big game 100,000 years ago, they relied on close-in attacks with thrusting spears. Such attacks were highly dangerous and physically taxing, so in those days, hunters had to be heavily muscled and have thick bones. That kind of body had its disadvantages–if nothing else, it required more food–but on the whole, it was the best solution in that situation. … but new weapons like the atlatl (a spearthrower) and the bow effectively stored muscle-generated energy, which meant that hunters could kill big game without big biceps and robust skeletons. Once that happened, lightly built people, who were better runners and did not need as much food, became competitively superior. The Bushmen of southern Africa…are a small, tough, lean people, less than five feet tall. It seems likely that the tools made the man–the bow begat the Bushmen.

Cro-magnons (now called “European Early Modern Humans” by people who can’t stand a good name,) were of course quite robust, much more so than the gracile Bushmen (Aka San.) Cro-magnons were not unique in their robustness–in fact all of our early human ancestors seem to have been fairly robust, including the species we descended from, such as Homo heidelbergensis and Homo ergaster. (The debate surrounding where the exact lines between human species should be drawn is long and there are no definite answers because we don’t have enough bones.)

We moderns–all of us, not just the Bushmen–are significantly less robust than our ancestors. Quoting from a review of Manthropology: The Science of the Inadequate Modern Male:

Twenty thousand years ago six male Australian Aborigines chasing prey left footprints in a muddy lake shore that became fossilized. Analysis of the footprints shows one of them was running at 37 kph (23 mph), only 5 kph slower than Usain Bolt was traveling at when he ran the 100 meters in world record time of 9.69 seconds in Beijing last year. But Bolt had been the recipient of modern training, and had the benefits of spiked running shoes and a rubberized track, whereas the Aboriginal man was running barefoot in soft mud. …

McAllister also presents as evidence of his thesis photographs taken by a German anthropologist early in the twentieth century. The photographs showed Tutsi initiation ceremonies in which young men had to jump their own height in order to be accepted as men. Some of them jumped as high as 2.52 meters, which is higher than the current world record of 2.45 meters. …

Other examples in the book are rowers of the massive trireme warships in ancient Athens who far exceeded the capabilities of modern rowers, Roman soldiers who completed the equivalent of one and a half marathons a day, carrying equipment weighing half their body weight …

McAllister attributes the decline to the more sedentary lifestyle humans have lived since the industrial revolution, which has made modern people less robust than before since machines do so much of the work. …

According to McAllister humans have lost 40 percent of the shafts of the long bones because they are no longer subjected to the kind of muscular loads that were normal before the industrial revolution. Even our elite athletes are not exposed to anywhere near the challenges and loads that were part of everyday life for pre-industrial people.

Long story short: humans are still evolving. We are not static; our bodies do not look like they did 100,000 years ago, 50,000 years ago, nor even 1,000 years ago. The idea that humans could not have undergone significant evolution in 50–100,000 years is simply wrong–dogs evolved from wolves in a shorter time.

Dogs are an interesting case, for despite their wide variety of physical forms, from Chihuahuas to Great Danes, from pugs to huskies, we class them all as dogs because they all behave as dogs. Dogs can interbreed with wolves and coyotes (and wolves and coyotes with each other,) and huskies look much more like wolves than like beagles, but they still behave like dogs.

The typical border collie can learn a new command after 5 repetitions and responds correctly 95% of the time, whereas a basset hound takes 80-100 repetitions to achieve a 25 percent accuracy rate.

I understand why border collies are smart, but why are bassets so stupid?

Henry and Greg’s main argument depends on two basic facts: First, the speed of evolution–does evolution work fast enough to have caused any significant changes in human populations since we left Africa?

How fast evolution works depends on the pressure, of course. If everyone over 5 feet tall died tomorrow, the next generation of humans would be much shorter than the current one–and so would their children.
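Quantitative geneticists capture this with the breeder's equation, R = h² × S: the response to selection equals the heritability of the trait times the selection differential. Here is a minimal simulation of the "everyone over 5 feet dies" scenario–my own toy sketch, not from the book, and the heritability and height figures are assumptions picked purely for illustration:

```python
import numpy as np

# Toy illustration of the breeder's equation R = h^2 * S
# (my own sketch; all numbers are illustrative assumptions, not from the book).
rng = np.random.default_rng(0)

h2 = 0.8                                   # assumed narrow-sense heritability of height
pop = rng.normal(170, 7, 1_000_000)        # adult heights in cm: mean 170, sd 7

survivors = pop[pop < 152.4]               # "everyone over 5 feet tall dies tomorrow"
S = survivors.mean() - pop.mean()          # selection differential
R = h2 * S                                 # expected response in the next generation

print(f"parental mean:           {pop.mean():.1f} cm")
print(f"surviving parents' mean: {survivors.mean():.1f} cm (differential {S:.1f} cm)")
print(f"expected offspring mean: {pop.mean() + R:.1f} cm")
```

Under those made-up numbers the next generation comes out roughly 15 cm shorter after a single generation of selection–which is the whole point: strong pressure means fast change.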

The end of the Ice Age also brought about a global rise in sea level. … As the waters rose, some mountains became islands. … These islands were too small to sustain populations of large predators, and in their absence the payoff for being huge disappeared. … Over a mere 5,000 years, elephants shrank dramatically, from an original height of 12 feet to as little as 3 feet. It is worth noting that elephant generations are roughly twenty years long, similar to those of humans.

We have, in fact, many cases of evolution happening over a relatively short period, from dogs to corn to human skin tone.

No one is arguing about the evolution of something major, like a new limb or an extra spleen–just the sorts of small changes to the genome that can have big effects, like the minor genetic differences that spell the difference between a wolf and a poodle.

Second, human populations need to be sufficiently distinct–that is, isolated–for traits to be meaningfully different in different places. Of course, we can see that people look different in different places. This alone is enough to prove the point–people in Japan have been sufficiently isolated from people in Iceland that genetic changes affecting appearance haven’t spread from one population to the other.

What about the claim that “There’s more variation within races than between them”?

This is an interesting, non-intuitive claim. It is true–but it is also true for humans and chimps, dogs and wolves. That is, there is more variation within humans than between humans and chimps–a clue that this factoid may not be very meaningful.

Let’s let the authors explain:

Approximately 85 percent of human genetic variation is within-group rather than between groups, while 15 percent is between groups. … genetic variation is distributed in a similar way in dogs: 70 percent of genetic variation is within-breed, while 30 percent is between-breed. …

Information about the distribution of genetic variation tells you essentially nothing about the size or significance of trait differences. The actual differences we observe in height, weight, strength, speed, skin color, and so on are real: it is not possible to argue them away. …

It turns out that the correlations between these genetic differences matter. … consider malaria resistance in northern Europeans and central Africans. Someone from Nigeria may have the sickle-cell mutation (a known defense against falciparum malaria,) while hardly anyone from northern Europe does, but even the majority of Nigerians who don't carry the sickle cell are far more resistant to malaria than any Swede. They have malaria-defense versions of many genes. That is the typical pattern you get from natural selection–correlated changes in a population, change in the same general direction, all a response to the same selection pressure.

In other words: suppose a population splits and goes in two different directions. Population A encounters no malaria, and so develops no malaria-resistant genes. Population B encounters malaria and quickly develops a hundred different mutations that all resist malaria. If some members of Population B have at least some of the null variants found in Population A, then there's very little variation between Pop A and B–all of Pop A's variants are in fact found in Pop B. Meanwhile, there's a great deal of variation within Pop B, which has developed 100 different ways to resist malaria. Yet the genetic differences between those populations are very important, especially if you're in an area with malaria.
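A toy simulation makes the same point (my own sketch, not from the book; the allele frequencies are invented, so the within-group share comes out even higher than the real-world 85 percent, but the logic is identical): a hundred loci, each shifted only slightly in Population B, still yield "almost all variation is within groups" while the two populations barely overlap on the summed trait.

```python
import numpy as np

# Toy sketch (not from the book): many small, correlated allele-frequency
# shifts produce a large trait difference even though nearly all variation
# is within-population. Frequencies are invented for illustration.
rng = np.random.default_rng(1)

n_loci, n_per_pop = 100, 10_000
p_a = np.full(n_loci, 0.50)          # Population A: no malaria pressure
p_b = np.full(n_loci, 0.60)          # Population B: slight shift at every locus

# genotypes: 0, 1, or 2 copies of the "resistance" allele at each locus
geno_a = rng.binomial(2, p_a, (n_per_pop, n_loci))
geno_b = rng.binomial(2, p_b, (n_per_pop, n_loci))

both = np.vstack([geno_a, geno_b])
within = (geno_a.var(axis=0).mean() + geno_b.var(axis=0).mean()) / 2
total = both.var(axis=0).mean()
print(f"share of variation within populations: {within / total:.0%}")

# ...yet the populations barely overlap on the summed "resistance score"
score_a, score_b = geno_a.sum(axis=1), geno_b.sum(axis=1)
print(f"mean score, Pop A: {score_a.mean():.0f}   Pop B: {score_b.mean():.0f}")
print(f"fraction of Pop A above Pop B's mean: {(score_a > score_b.mean()).mean():.1%}")
```

Each individual locus differs trivially between the groups; the correlated sum does not.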

What if the differences between groups is just genetic drift?

Most or all of the alleles that are responsible for obvious differences in appearance between populations–such as the gene variants causing light skin color or blue eyes–have undergone strong selection. In these cases, a “big effect” on fitness means anything from a 2 or 3 percent increase on up. Judging from the rate at which new alleles have increased in frequency, this must be the case for genes that determine skin color (SLC24A5), eye color (HERC2), lactose tolerance (LCT), and dry earwax (ABCC11), of all things.

In fact, modern phenotypes are surprisingly young–blond hair, white skin, and blue eyes all evolved around a mere 10,000 years ago–maybe less. For these traits to have spread as far as they have, so quickly, they either confer some important evolutionary benefit or happen to occur in people who have some other evolutionarily useful trait, like lactose tolerance:

Lactose-tolerant Europeans carry a particular mutation that is only a few thousand years old, and so those Europeans also carry much of the original haplotype. In fact, the shared haplotype around that mutation is over 1 million bases long.

Recent studies have found hundreds of cases of long haplotypes indicating recent selection: some have reached 100 percent frequency, more have intermediate frequencies, and most are regional. Many are very recent: The rate of origination peaks at around 5,500 years ago in the European and Chinese samples, and at about 8,500 years ago in the African sample.

(Note that the map of blue eyes and the map of lactose tolerance do not exactly correlate–the Baltic is a blue eyes hotspot, but not particularly a lactose hotspot–perhaps because hunter-gatherers hung on longer here by exploiting rich fishing spots.)

Could these explosions at a particular date be the genetic signatures of large conquering events? 5,500 years ago is about right for the Indo-European expansion (perhaps some similar expansion happened in the East at the same time.) 8,500 years ago seems too early to have contributed to the Bantu Expansion–did someone else conquer west Africa around 8,500 years ago?
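Incidentally, the "over 1 million bases" figure in the lactase quote falls out of simple recombination arithmetic: recombination chips away at the ancestral segment around a mutation at roughly 1 percent per centimorgan per generation, so the expected shared stretch around an allele that arose g generations ago is on the order of 200/g centimorgans (about 100/g on each flank). A rough back-of-the-envelope calculation, using my own assumptions of 25-year generations and the usual ~1 cM ≈ 1 Mb rule of thumb:

```python
# Rough back-of-the-envelope (my own arithmetic, not from the book):
# expected shared haplotype around a mutation of age g generations is
# roughly 2 * (100 / g) cM, and ~1 cM is roughly ~1 Mb in humans.
GEN_YEARS = 25                               # assumed generation time

for years in (2_000, 5_000, 10_000, 50_000):
    generations = years / GEN_YEARS
    expected_cm = 2 * 100 / generations      # both flanks combined
    print(f"{years:>6}-year-old allele -> ~{expected_cm:.1f} cM (~{expected_cm:.1f} Mb) shared")
```

A mutation a few thousand years old should indeed sit on a megabase-scale shared haplotype, while one 50,000 years old should not–which is exactly why unusually long haplotypes flag recent selection.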

Let’s finish up:

Since we have sequenced the chimpanzee genome, we know the size of the genetic difference between chimps and humans. Since we also have decent estimates of the length of time since the two species split, we know the long-term rate of genetic change. The rate of change over the past few thousand years is far greater than this long-term rate over the past few million years, on the order of 100 times greater. …

The ultimate cause of this accelerated evolution was the set of genetic changes that led to an increased ability to innovate. …

Every major innovation led to new selective pressures, which led to more evolutionary change, and the most spectacular of those innovations was the development of agriculture.

Innovation itself has increased dramatically. The Stone Age lasted roughly 3.4 million years (you’ll probably note that this is longer than Homo sapiens has been around.) The most primitive stone tradition, the Oldowan, lasted for nearly 3 million of those 3.4; the next period, the Acheulean, lasted for about 1.5 million years. (There is some overlap in tool traditions.) By contrast, the age of metals–bronze, copper, iron, etc–has been going on for a measly 5,500 years, modern industrial society is only a couple of centuries old–at most.

What triggered this shift from 3 million years of shitty stone tools with nary an innovation in sight to a society that split the atom and put a man on the moon? And once culture was in place, what traits did it select–and what traits are we selecting for right now?

Is the singularity yet to come, or did we hit it 10,000 years ago–or before?

 

By the way, if you haven’t started the book yet, I encourage you to go ahead–you’ve plenty of time before next week to catch up.

Book Club: How to Create a Mind, pt 2/2

Ray Kurzweil, writer, inventor, thinker

Welcome back to EvX's Book Club. Today we are finishing Ray Kurzweil's How to Create a Mind: The Secret of Human Thought Revealed.

Spiders are interesting, but Kurzweil’s focus is computers, like Watson, which trounced the competition on Jeopardy!

I’ll let Wikipedia summarize Watson:

Watson was created as a question answering (QA) computing system that IBM built to apply advanced natural language processing, information retrieval, knowledge representation, automated reasoning, and machine learning technologies to the field of open domain question answering.[2]

The sources of information for Watson include encyclopedias, dictionaries, thesauri, newswire articles, and literary works. Watson also used databases, taxonomies, and ontologies. …

Watson parses questions into different keywords and sentence fragments in order to find statistically related phrases.[22] Watson’s main innovation was not in the creation of a new algorithm for this operation but rather its ability to quickly execute hundreds of proven language analysis algorithms simultaneously.[22][24] The more algorithms that find the same answer independently the more likely Watson is to be correct.[22] Once Watson has a small number of potential solutions, it is able to check against its database to ascertain whether the solution makes sense or not.[22]
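The core trick–run many independent analyzers and let agreement among them boost confidence in a candidate answer–is easy to caricature in a few lines. Here is a toy sketch of the idea (mine, not IBM's actual pipeline; the "analyzers" are invented stand-ins that return canned guesses):

```python
from collections import Counter

# Toy sketch of the "many algorithms vote" idea described above (my own
# illustration, nothing like IBM's real system). Each "analyzer" stands in
# for one of Watson's hundreds of language-analysis algorithms.
def keyword_analyzer(clue):  return "Chicago"
def date_analyzer(clue):     return "Chicago"
def anagram_analyzer(clue):  return "Toronto"

ANALYZERS = [keyword_analyzer, date_analyzer, anagram_analyzer]

def answer(clue, knowledge_base):
    votes = Counter(analyzer(clue) for analyzer in ANALYZERS)
    # drop candidates the knowledge base rules out, then prefer the candidate
    # the most analyzers agreed on -- independent agreement is the confidence signal
    plausible = {c: n for c, n in votes.items() if c in knowledge_base}
    return max(plausible, key=plausible.get) if plausible else None

print(answer("Its largest airport is named for a WWII hero", {"Chicago", "Toronto"}))
```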

Kurzweil opines:

That is at least one reason why Watson represents such a significant milestone: Jeopardy! is precisely such a challenging language task. … What is perhaps not evident to many observers is that Watson not only had to master the language in the unexpected and convoluted queries, but for the most part its knowledge was not hand-coded. It obtained that knowledge by actually reading 200 million pages of natural-language documents, including all of Wikipedia… If Watson can understand and respond to questions based on 200 million pages–in three seconds!–there is nothing to stop similar systems from reading the other billions of documents on the Web. Indeed, that effort is now under way.

A point about the history of computing that may be petty of me to emphasize:

Babbage’s conception is quite miraculous when you consider the era in which he lived and worked. However, by the mid-twentieth century, his ideas had been lost in the mists of time (although they were subsequently rediscovered.) It was von Neumann who conceptualized and articulated the key principles of the computer as we know it today, and the world recognizes this by continuing to refer to the von Neumann machine as the principal model of computation. Keep in mind, though, that the von Neumann machine continually communicates data between its various units and within those units, so it could not be built without Shannon’s theorems and the methods he devised for transmitting and storing reliable digital information. …

You know what? No, it’s not petty.

Amazon lists 57 books about Ada Lovelace aimed at children, 14 about Alan Turing, and ZERO about John von Neumann.

(Some of these search results are inevitably irrelevant, but the counts are roughly correct.)

“EvX,” you may be saying, “Why are you counting children’s books?”

Because children are our future, and the books that get published for children show what society deems important for children to learn–and will have an effect on what adults eventually know.

I don’t want to demean Ada Lovelace’s role in the development of software, but surely von Neumann’s contributions to the field are worth a single book!

*Slides soapbox back under the table*

Anyway, back to Kurzweil, now discussing quantum mechanics:

There are two ways to view the questions we have been considering–the converse Western and Eastern perspectives on the nature of consciousness and of reality. In the Western perspective, we start with a physical world that evolves patterns of information. After a few billion years of evolution, the entities in that world have evolved sufficiently to become conscious beings. In the Eastern view, consciousness is the fundamental reality; the physical world only comes into existence through the thoughts of conscious beings. …

The East-West divide on the issue of consciousness has also found expression in opposing schools of thought in the field of subatomic physics. In quantum mechanics, particles exist in what are called probability fields. Any measurement carried out on them by a measuring device causes what is called a collapse of the wave function, meaning that the particle suddenly assumes a particular location. A popular view is that such a measurement constitutes observation by a conscious observer… Thus the particle assumes a particular location … only when it is observed. Basically particles figure that if no one is bothering to look at them, they don't need to decide where they are. I call this the Buddhist school of quantum mechanics …

Niels Bohr

Or as Niels Bohr put it, "A physicist is just an atom's way of looking at itself." He also claimed that we could describe electrons as exercising free will in choosing their positions, a statement I do not think he meant literally; "We must be clear that when it comes to atoms, language can be used only as in poetry," as he put it.

Kurzweil explains the Western interpretation of quantum mechanics:

There is another interpretation of quantum mechanics… In this analysis, the field representing a particle is not a probability field, but rather just a function that has different values in different locations. The field, therefore, is fundamentally what the particle is. … The so-called collapse of the wave function, this view holds, is not a collapse at all. … It is just that a measurement device is also made up of particles with fields, and the interaction of the particle field being measured and the particle fields of the measuring device results in a reading of the particle being in a particular location. The field, however, is still present. This is the Western interpretation of quantum mechanics, although it is interesting to note that the more popular view among physicists worldwide is what I have called the Eastern interpretation.

Soviet atomic bomb, 1951

For example, Bohr has the yin-yang symbol on his coat of arms, along with the motto contraria sunt complementa, or contraries are complementary. Oppenheimer was such a fan of the Bhagavad Gita that he read it in Sanskrit and quoted it upon successful completion of the Trinity Test, “If the radiance of a thousand suns were to burst at once into the sky, that would be like the splendor of the mighty one,” and “Now I am become death, the destroyer of worlds.” He credited the Gita as one of the most important books in his life.

Why the appeal of Eastern philosophy? Is it something about physicists and mathematicians? Leibniz, after all, was fond of the I Ching. As Wikipedia says:

Leibniz was perhaps the first major European intellectual to take a close interest in Chinese civilization, which he knew by corresponding with, and reading other works by, European Christian missionaries posted in China. Having read Confucius Sinarum Philosophus on the first year of its publication,[153] he concluded that Europeans could learn much from the Confucian ethical tradition. He mulled over the possibility that the Chinese characters were an unwitting form of his universal characteristic. He noted with fascination how the I Ching hexagrams correspond to the binary numbers from 000000 to 111111, and concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical mathematics he admired.[154] Leibniz communicated his ideas of the binary system representing Christianity to the Emperor of China hoping it would convert him.[84] Leibniz may be the only major Western philosopher who attempted to accommodate Confucian ideas to prevailing European beliefs.[155]

Leibniz’s attraction to Chinese philosophy originates from his perception that Chinese philosophy was similar to his own.[153] The historian E.R. Hughes suggests that Leibniz’s ideas of “simple substance” and “pre-established harmony” were directly influenced by Confucianism, pointing to the fact that they were conceived during the period that he was reading Confucius Sinarum Philosophus.[153]

Perhaps it is just that physicists and mathematicians are naturally curious people, and Eastern philosophy is novel to a Westerner, or perhaps by adopting Eastern ideas, they were able to purge their minds of earlier theories of how the universe works, creating a blank space in which to evaluate new data without being biased by old conceptions–or perhaps it is just something about the way their minds work.

As for quantum, I favor the de Broglie-Bohm interpretation of quantum mechanics, but obviously I am not a physicist and my opinion doesn’t count for much. What do you think?

But back to the book. If you are fond of philosophical ruminations on the nature of consciousness, like "If someone who could only see in black and white read extensively about the color 'red,' could they ever achieve the qualia of actually seeing red?" or "What if a man were locked in a room with a perfect Chinese rulebook that told him which Chinese characters to write in response to any set of characters written on notes passed under the door? The responses are in perfect Chinese, but the man himself understands not a word of Chinese," then you'll enjoy the discussion. If you already covered all of this back in Philosophy 101, you might find it a bit redundant.

Kurzweil notes that conditions have improved massively over the past century for almost everyone on earth, but people are increasingly anxious:

A primary reason people believe life is getting worse is because our information about the problems of the world has steadily improved. If there is a battle today somewhere on the planet, we experience it almost as if we were there. During World War II, tens of thousands of people might perish in a battle, and if the public could see it at all it was in a grainy newsreel in a movie theater weeks later. During World War I a small elite could read about the progress of the conflict in the newspaper (without pictures.) During the nineteenth century there was almost no access to news in a timely fashion for anyone.

As for the future of man, machines, and code, Kurzweil is even more optimistic than Auerswald:

The last invention that biological evolution needed to make–the neocortex–is inevitably leading to the last invention that humanity needs to make–truly intelligent machines–and the design of one is inspiring the other. … by the end of this century we will be able to create computation at the limits of what is possible, based on the laws of physics… We call matter and energy organized in this way "computronium," which is vastly more powerful pound per pound than the human brain. It will not just be raw computation but will be infused with intelligent algorithms constituting all of human-machine knowledge. Over time we will convert much of the mass and energy in our tiny corner of the galaxy that is suitable for this purpose to computronium. … we will need to spread out to the rest of the galaxy and universe. …

How long will it take for us to spread our intelligence in its nonbiological form throughout the universe? … waking up the universe, and then intelligently deciding its fate by infusing it with our human intelligence in its nonbiological form, is our destiny.

Whew! That is quite the ending–and with that, so will we. I hope you enjoyed the book. What did you think of it? Will Humanity 2.0 be good? Bad? Totally different? Or does the Fermi Paradox imply that Kurzweil is wrong? Did you like this shorter Book Club format? And do you have any ideas for our next Book Club pick?

Book Club: How to Create a Mind by Ray Kurzweil pt 1/2

Welcome to our discussion of Ray Kurzweil's How to Create a Mind: The Secret of Human Thought Revealed. This book was requested by one of my fine readers; I hope you have enjoyed it.

If you aren’t familiar with Ray Kurzweil (you must be new to the internet), he is a computer scientist, inventor, and futurist whose work focuses primarily on artificial intelligence and phrases like “technological singularity.”

Wikipedia really likes him.

The book is part neuroscience, part explanations of how various AI programs work. Kurzweil uses models of how the brain works to enhance his pattern-recognition programs, and evidence from what works in AI programs to build support for theories on how the brain works.

The book delves into questions like “What is consciousness?” and “Could we recognize a sentient machine if we met one?” along with a brief history of computing and AI research.

My core thesis, which I call the Law of Accelerating Returns (LOAR), is that fundamental measures of information technology follow predictable and exponential trajectories…

I found this an interesting sequel to Auerswald’s The Code Economy and counterpart to Gazzaniga’s Who’s In Charge? Free Will and the Science of the Brain, which I listened to in audiobook form and therefore cannot quote very easily. Nevertheless, it’s a good book and I recommend it if you want more on brains.

The quintessential example of the law of accelerating returns is the perfectly smooth, doubly exponential growth of the price/performance of computation, which has held steady for 110 years through two world wars, the Great Depression, the Cold War, the collapse of the Soviet Union, the reemergence of China, the recent financial crisis, … Some people refer to this phenomenon as "Moore's law," but… [this] is just one paradigm among many.

From Ray Kurzweil,

Auerswald claims that the advance of “code” (that is, technologies like writing that allow us to encode information) has, for the past 40,000 years or so, supplemented and enhanced human abilities, making our lives better. Auerswald is not afraid of increasing mechanization and robotification of the economy putting people out of jobs because he believes that computers and humans are good at fundamentally different things. Computers, in fact, were invented to do things we are bad at, like decode encryption, not stuff we’re good at, like eating.

The advent of computers, in his view, lets us concentrate on the things we’re good at, while off-loading the stuff we’re bad at to the machines.

Kurzweil’s view is different. While he agrees that computers were originally invented to do things we’re bad at, he also thinks that the computers of the future will be very different from those of the past, because they will be designed to think like humans.

A computer that can think like a human can compete with a human–and since it isn’t limited in its processing power by pelvic widths, it may well out-compete us.

But Kurzweil does not seem worried:

Ultimately we will create an artificial neocortex that has the full range and flexibility of its human counterpart. …

When we augment our own neocortex with a synthetic version, we won't have to worry about how much additional neocortex can physically fit into our bodies and brains, as most of it will be in the cloud, like most of the computing we use today. I estimated earlier that we have on the order of 300 million pattern recognizers in our biological neocortex. That's as much as could be squeezed into our skulls even with the evolutionary innovation of a large forehead and with the neocortex taking about 80 percent of the available space. As soon as we start thinking in the cloud, there will be no natural limits–we will be able to use billions or trillions of pattern recognizers, basically whatever we need, and whatever the law of accelerating returns can provide at each point in time. …

Last but not least, we will be able to back up the digital portion of our intelligence. …

That is kind of what I already do with this blog. The downside is that sometimes you people see my incomplete or incorrect thoughts.

On the squishy side, Kurzweil writes of the biological brain:

The story of human intelligence starts with a universe that is capable of encoding information. This was the enabling factor that allowed evolution to take place. …

The story of evolution unfolds with increasing levels of abstraction. Atoms–especially carbon atoms, which can create rich information structures by linking in four different directions–formed increasingly complex molecules. …

A billion years later, a complex molecule called DNA evolved, which could precisely encode lengthy strings of information and generate organisms described by these "programs". …

The mammalian brain has a distinct aptitude not found in any other class of animal. We are capable of hierarchical thinking, of understanding a structure composed of diverse elements arranged in a pattern, representing that arrangement with a symbol, and then using that symbol as an element in a yet more elaborate configuration. …

I really want to know if squids or octopuses can engage in symbolic thought.

Through an unending recursive process we are capable of building ideas that are ever more complex. … Only Homo sapiens have a knowledge base that itself evolves, grows exponentially, and is passed down from one generation to another.

Kurzweil proposes an experiment to demonstrate something of how our brains encode memories: say the alphabet backwards.

If you’re among the few people who’ve memorized it backwards, try singing “Twinkle Twinkle Little Star” backwards.

It’s much more difficult than doing it forwards.

This suggests that our memories are sequential and in order. They can be accessed in the order they are remembered. We are unable to reverse the sequence of a memory.

Funny how that works.
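A loose analogy in code (my own, not Kurzweil's): stored this way, a sequence behaves like a singly linked chain–cheap to walk forward, and reversible only by re-walking it from the start for every step backward.

```python
# Loose analogy (my own, not Kurzweil's): memories as a singly linked chain.
# Forward recall just follows "next" links; backward recall has to re-scan
# the chain to find each predecessor, which is why it feels so much harder.
NEXT = {"A": "B", "B": "C", "C": "D", "D": None}   # ...and so on to Z

def recall_forward(start="A"):
    out, cur = [], start
    while cur is not None:
        out.append(cur)
        cur = NEXT[cur]
    return out

def recall_backward(end="D"):
    out, target = [], end
    while target is not None:
        out.append(target)
        # no "previous" pointer: scan the whole chain for what precedes target
        target = next((k for k, v in NEXT.items() if v == target), None)
    return out

print(recall_forward())    # ['A', 'B', 'C', 'D']
print(recall_backward())   # ['D', 'C', 'B', 'A'] -- a full scan per step
```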

On the neocortex itself:

A critically important observation about the neocortex is the extraordinary uniformity of its fundamental structure. … In 1957 Mountcastle discovered the columnar organization of the neocortex. … [In 1978] he described the remarkably unvarying organization of the neocortex, hypothesizing that it was composed of a single mechanism that was repeated over and over again, and proposing the cortical column as the basic unit. The differences in the height of certain layers in different regions noted above are simply differences in the amount of interconnectivity that the regions are responsible for dealing with. …

extensive experimentation has revealed that there are in fact repeating units within each column. It is my contention that the basic unit is a pattern recognizer and that this constitutes the fundamental component of the neocortex.
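Here is a toy sketch of what a hierarchy of such repeating units might look like in code–my own drastic simplification of the description above, not Kurzweil's actual model:

```python
# Toy sketch (my own simplification, not Kurzweil's model): each unit "fires"
# when enough of its lower-level child patterns have fired, and higher-level
# units are built out of lower-level ones.
class PatternRecognizer:
    def __init__(self, name, children=(), threshold=1.0):
        self.name = name
        self.children = list(children)   # lower-level recognizers (empty = leaf)
        self.threshold = threshold       # fraction of children required to fire

    def fires(self, inputs):
        if not self.children:            # leaf: fires if its raw symbol is present
            return self.name in inputs
        hits = sum(child.fires(inputs) for child in self.children)
        return hits / len(self.children) >= self.threshold

# letters -> a word -> a phrase, each level tolerating some missing input
letters = {c: PatternRecognizer(c) for c in "APLEI"}
apple = PatternRecognizer("APPLE", [letters[c] for c in "APLE"], threshold=0.75)
pie = PatternRecognizer("PIE", [letters[c] for c in "PIE"])
apple_pie = PatternRecognizer("APPLE PIE", [apple, pie])

print(apple.fires(set("APLE")))       # True
print(apple.fires(set("APL")))        # True -- tolerates a missing "E"
print(apple_pie.fires(set("APLEI")))  # True once both sub-patterns fire
```

The point is only the shape of the thing: identical little units, stacked, each treating the outputs of the layer below as its inputs.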

As I read, Kurzweil's hierarchical models reminded me of Chomsky's theories of language–Ray and Noam are both associated with MIT and have probably conversed many times. Kurzweil does get around to discussing Chomsky's theories and their relationship to his work:

Language is itself highly hierarchical and evolved to take advantage of the hierarchical nature of the neocortex, which in turn reflects the hierarchical nature of reality. The innate ability of humans to learn the hierarchical structures in language that Noam Chomsky wrote about reflects the structure of the neocortex. In a 2002 paper he co-authored, Chomsky cites the attribute of "recursion" as accounting for the unique language faculty of the human species. Recursion, according to Chomsky, is the ability to put together small parts into a larger chunk, and then use that chunk as a part in yet another structure, and to continue this process iteratively. In this way we are able to build the elaborate structure of sentences and paragraphs from a limited set of words. Although Chomsky was not explicitly referring here to brain structure, the capability he is describing is exactly what the neocortex does. …

This sounds good to me, but I am under the impression that Chomsky’s linguistic theories are now considered outdated. Perhaps that is only his theory of universal grammar, though. Any linguistics experts care to weigh in?
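Recursion in the sense quoted above–re-using a chunk as a part inside a bigger chunk of the same kind–is easy to show in code. A minimal sketch (my own illustration, not from either author):

```python
import random

# Minimal sketch of recursion in Chomsky's sense (my own illustration, not
# from the book): a noun phrase may contain another noun phrase, so a tiny
# set of rules generates unboundedly nested structures.
random.seed(3)

def noun_phrase(depth):
    np = random.choice(["the dog", "a cat", "the neighbor"])
    if depth > 0:                        # the recursive step: an NP inside an NP
        np = f"{np} that saw {noun_phrase(depth - 1)}"
    return np

print(noun_phrase(0) + " barked.")   # e.g. "a cat barked."
print(noun_phrase(3) + " barked.")   # e.g. "the dog that saw a cat that saw ... barked."
```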

According to Wikipedia:

Within the field of linguistics, McGilvray credits Chomsky with inaugurating the “cognitive revolution“.[175] McGilvray also credits him with establishing the field as a formal, natural science,[176] moving it away from the procedural form of structural linguistics that was dominant during the mid-20th century.[177] As such, some have called him “the father of modern linguistics”.[178][179][180][181]

The basis to Chomsky’s linguistic theory is rooted in biolinguistics, holding that the principles underlying the structure of language are biologically determined in the human mind and hence genetically transmitted.[182] He therefore argues that all humans share the same underlying linguistic structure, irrespective of sociocultural differences.[183] In adopting this position, Chomsky rejects the radical behaviorist psychology of B. F. Skinner which views the mind as a tabula rasa (“blank slate”) and thus treats language as learned behavior.[184] Accordingly, he argues that language is a unique evolutionary development of the human species and is unlike modes of communication used by any other animal species.[185][186] Chomsky’s nativist, internalist view of language is consistent with the philosophical school of “rationalism“, and is contrasted with the anti-nativist, externalist view of language, which is consistent with the philosophical school of “empiricism“.[187][174]

Anyway, back to Kurzweil, who has an interesting bit about love:

Science has recently gotten into the act as well, and we are now able to identify the biochemical changes that occur when someone falls in love. Dopamine is released, producing feelings of happiness and delight. Norepinephrine levels soar, which leads to a racing heart and overall feelings of exhilaration. These chemicals, along with phenylethylamine, produce elevation, high energy levels, focused attention, loss of appetite, and a general craving for the object of one's desire. … serotonin levels go down, similar to what happens in obsessive-compulsive disorder….

If these biochemical phenomena sound similar to those of the fight-or-flight syndrome, they are, except that we are running toward something or someone; indeed, a cynic might say toward rather than away from danger. The changes are also fully consistent with those of the early phase of addictive behavior. … Studies of ecstatic religious experiences also show the same physical phenomena; it can be said that the person having such an experience is falling in love with God or whatever spiritual connection on which they are focused. …

Religious readers care to weigh in?

Consider two related species of voles: the prairie vole and the montane vole. They are pretty much identical, except that the prairie vole has receptors for oxytocin and vasopressin, whereas the montane vole does not. The prairie vole is noted for lifetime monogamous relationships, while the montane vole resorts almost exclusively to one-night stands.

Learning by species:

A mother rat will build a nest for her young even if she has never seen another rat in her lifetime. Similarly, a spider will spin a web, a caterpillar will create her own cocoon, and a beaver will build a dam, even if no contemporary ever showed them how to accomplish these complex tasks. That is not to say that these are not learned behaviors. It is just that the animals did not learn them in a single lifetime… The evolution of animal behavior does constitute a learning process, but it is learning by the species, not by the individual, and the fruits of this learning process are encoded in DNA.

I think that’s enough for today; what did you think? Did you enjoy the book? Is Kurzweil on the right track with his pattern recognizers? Are non-biological neocortexes on the horizon? Will we soon convert the solar system to computronium?

Let’s continue this discussion next Monday–so if you haven’t read the book yet, you still have a whole week to finish.

 

A Little Review of Big Data Books

I recently finished three books on "big data"–Big Data: A Revolution That Will Transform How We Live, Work, and Think, by Viktor Mayer-Schönberger and Kenneth Cukier; Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are, by Seth Stephens-Davidowitz; and Big Data at Work: Dispelling the Myths, Uncovering the Opportunities, by Thomas H. Davenport.

None of these books was a whiz-bang thriller, but I enjoyed them.

Big Data was a very sensible introduction. What exactly is “big data”? It’s not just bigger data sets (though it is also that.) It’s the opportunity to get all the data.

Until now, the authors point out, we have lived in a data poor world. We have had to carefully design our surveys to avoid sampling bias because we just can’t sample that many people. There’s a whole bunch of math done over in statistics to calculate how certain we can be about a particular result, or whether it could just be the result of random chance biasing our samples. I could poll 10,000 people about their jobs, and that might be a pretty good sample, but if everyone I polled happens to live within walking distance of my house, is this a very representative sample of everyone in the country? Now think about all of those studies on the mechanics of sleep done on whatever college students or homeless guys a scientist could convince to sleep in a lab for a week. How representative are they?
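The walking-distance problem is easy to simulate. A toy sketch (mine, not from any of these books; the income numbers are invented): if the thing you're measuring varies by neighborhood, a sample of 10,000 neighbors can be far more misleading than 500 random respondents.

```python
import numpy as np

# Toy sketch (my own, not from the books): a huge convenience sample from one
# neighborhood can be far more misleading than a small random sample.
rng = np.random.default_rng(42)

# invented toy "country": 100 neighborhoods with different typical incomes
neighborhood_means = rng.normal(50_000, 15_000, 100)
population = np.concatenate(
    [rng.normal(m, 5_000, 10_000) for m in neighborhood_means]
)

# suppose I happen to live in the richest neighborhood and poll my neighbors
richest = neighborhood_means.argmax()
my_neighborhood = rng.normal(neighborhood_means[richest], 5_000, 10_000)
random_sample = rng.choice(population, 500, replace=False)

print(f"true mean income:        {population.mean():,.0f}")
print(f"10,000 of my neighbors:  {my_neighborhood.mean():,.0f}")
print(f"500 random respondents:  {random_sample.mean():,.0f}")
```

The big biased sample misses badly; the small random one lands close to the truth. More data does not fix a bad sampling frame.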

Today, though, we suddenly live in a data rich world. An exponentially data rich world. A world in which we no longer need to correct for bias in our sample, because we don’t have to sample. We can just get… all the data. You can go to Google and find out how many people searched for “rabbit” on Tuesday, or how many people misspelled “rabbit” in various ways.

Data is being used in new and interesting (and sometimes creepy) ways. Many things that previously weren’t even considered data are now being quantitized–like one researcher quantitizing people’s backsides to determine whether a car is being driven by its owner, or a stranger.

One application I find promising is using people’s searches for various disease symptoms to identify people who may have various diseases before they seek out a doctor. Catching cancer patients earlier could save millions of lives.

I don’t have the book in front of me anymore, so I am just going by memory, but it made a good companion to Auerswald’s The Code Economy, since the modern economy runs so much on data.

Everybody Lies was a much more lighthearted, anecdotal approach to the subject, discussing lots of different studies. Davidowitz was inspired by Freakonomics, and he wants to use Big Data to uncover hidden truths of human behavior.

The book discusses, for example, people's pornographic searches, (as per the title, people routinely lie about how much porn they look at on the internet,) and whether people's pornographic preferences can be used to determine what percent of people in each state are gay. It turns out that we can get a breakdown of porn queries by state and variety, allowing a rough estimate of the gay and straight population of each state–and it appears that what people are willing to tell pollsters about their sexuality doesn't match what they search for online. In more conservative states, people are less likely to admit to pollsters that they are gay, but plenty of supposedly "straight" people are searching for gay porn–about the same number of people as actually admit to being gay in more liberal states.

Stephens-Davidowitz uses similar data to determine that people have been lying to pollsters (or perhaps themselves) about whom they plan to vote for. For example, Donald Trump got anomalously high votes in some areas, and Obama got anomalously low votes, compared to what people in those areas told pollsters. Both sets of areas correlated strongly with the parts of the country where people made a lot of racist Google searches.

Most of the studies discussed are amusing, like the discovery of the racehorse American Pharoah. Others are quite important, like a study that found that child abuse was probably actually going up at a time when official reports said it wasn’t–the reports likely missed the abuse because funding for investigating it had been cut.

At times the author steps beyond the studies and offers interpretations of the results that I think go beyond what the data can tell us, like his conclusion that parents are biased against their daughters because they worry more about daughters being overweight than about sons, or because they are more likely to Google “is my son a genius?” than “is my daughter a genius?”

I can think of a variety of alternative explanations. E.g., society itself is crueler to overweight women than to overweight men, so it is reasonable for parents to worry more about a daughter who will face that cruelty than about a son who will not. Girls are more likely to be in gifted programs than boys, so perhaps giftedness in girls is simply less exceptional than giftedness in boys, who are more unusual. Or perhaps male giftedness differs from female giftedness in some way that leaves parents needing more information on the topic.

Now, here’s an interesting study. Google can track how many people make Islamophobic searches at any particular time. Compared against Obama’s speech that tried to calm outrage after the San Bernardino attack, this data reveals that the speech was massively unsuccessful. Islamophobic searches doubled during and after the speech. Negative searches about Syrian refugees rose 60%, while searches asking how to help dropped 35%.

In fact, just about every negative search we could think to test regarding Muslims shot up during and after Obama’s speech, and just about every positive search we could think to test declined. …

Instead of calming the angry mob, as everybody thought he was doing, the internet data tells us that Obama actually inflamed it.

However, Obama later gave another speech, on the same topic. This one was much more successful. As the author put it, this time, Obama spent little time insisting on the value of tolerance, which seems to have just made people less tolerant. Instead, “he focused overwhelmingly on provoking people’s curiosity and changing their perceptions of Muslim Americans.”

People tend to react positively toward people or things they regard as interesting, and invoking curiosity is a good way to get people interested.

The author points out that “big data” is most likely to be useful in fields where the current data is poor. In the case of American Pharoah, for example, people just plain weren’t getting a lot of data on racehorses before buying and selling them. It was a field based on people who “knew” horses and their pedigrees, not on people who x-rayed horses to see how big their hearts and lungs were. By contrast, hedge funds investing in the stock market are already up to their necks in data, trying to maximize every last penny. Horse racing was ripe for someone to become successful by unearthing previously unused data and making good predictions; the stock market is not.

And for those keeping track of how many people make it to the end of the book, I did. I even read the endnotes, because I do that.

Big Data at Work was very different. Rather than entertain us with the success of Google Flu or academic studies of human nature, BDAW discusses how to implement “big data” (the author admits it is a silly term) strategies at work. This is a good book if you own, run, or manage a business that could utilize data in some way. UPS, for example, uses driving data to optimize its package delivery routes; even a small saving per package adds up to a large saving for the company as a whole, since it delivers so many packages.

The author points out that “big data” often isn’t big so much as unstructured. Photographs, call logs, Facebook posts, and Google searches may all be “data,” but you will need some way to quantitize these before you can make much use of them. For example, a company might gather customer feedback reports, feed them into a program that recognizes positive or negative language, and then quantitize how many people called to report that they liked Product X versus how many called to report that they disliked it.
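
(As a toy illustration of that quantitizing step, here is a minimal keyword-based sketch in Python. A real system would use a trained sentiment model rather than a hand-made word list, and the product names and feedback lines are invented.)

    from collections import Counter

    # Toy word lists -- a real pipeline would use a proper sentiment model.
    POSITIVE = {"love", "like", "great", "works", "happy"}
    NEGATIVE = {"hate", "broken", "refund", "disappointed", "slow"}

    def classify(feedback: str) -> str:
        # Very crude: count positive vs. negative words in the text.
        words = set(feedback.lower().replace(",", " ").split())
        pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
        if pos > neg:
            return "positive"
        if neg > pos:
            return "negative"
        return "neutral"

    calls = [
        "I love Product X, it works great",
        "Product X arrived broken, I want a refund",
        "Just calling to update my address",
    ]
    print(Counter(classify(c) for c in calls))  # counts of positive / negative / neutral calls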

I think an area ripe for this kind of quantitization is medical data, which currently languishes in doctors’ files, much of it on paper, protected by patient privacy laws. But people post a good deal of information about their medical conditions online, seeking help from other people who’ve dealt with the same diseases. Currently, there are a lot of diseases (take depression) where treatment is very hit-or-miss, and doctors basically have to try a bunch of drugs in a row until they find one that works. A program that could trawl through forum posts and assemble data on patients and medical treatments that worked or failed could help doctors refine treatment for various difficult conditions–“Oh, you look like the kind of patient who would respond well to melatonin,” or “Oh, you have the characteristics that make you a good candidate for Prozac.”
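
(Here is a hypothetical sketch of the easy half of that idea, in Python, assuming the genuinely hard part (extracting condition, treatment, and outcome from messy forum posts) has already been done; all the records below are invented.)

    from collections import defaultdict

    # Invented records, standing in for data already extracted from forum posts.
    # The extraction itself is the hard natural-language problem.
    reports = [
        ("depression", "melatonin", True),
        ("depression", "melatonin", False),
        ("depression", "prozac", True),
        ("depression", "prozac", True),
    ]

    tally = defaultdict(lambda: [0, 0])  # (condition, treatment) -> [helped, total]
    for condition, treatment, helped in reports:
        tally[(condition, treatment)][1] += 1
        if helped:
            tally[(condition, treatment)][0] += 1

    for (condition, treatment), (helped, total) in tally.items():
        print(f"{condition} / {treatment}: {helped}/{total} posters reported improvement")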

The author points out that most companies will not be able to keep the massive quantities of data they are amassing. A hospital, for example, collects a great deal of data about patients’ heart rates and blood oxygen levels every day. While it might be interesting to look back at 10 years’ worth of patient heart rate data, hospitals can’t really afford to invest in databanks to store all of this information. Rather, what companies need is real-time or continuous data processing that analyzes current data and makes predictions/recommendations for what the company (or doctor) should do now.

For example, one of the books (I believe it was “Big Data”) discussed a study of premature babies which found, counter-intuitively, that they were most likely to have emergencies soon after a lull in which they had seemed to be doing rather well–stable heart rate, good breathing, etc. Knowing this, a hospital could have a computer monitoring all of its premature babies and automatically updating their status (“stable” “improving” “critical” “likely to have a big problem in six hours”) and notifying doctors of potential problems.
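
(A minimal sketch of what such an automated monitor might look like, in Python; the thresholds and the “suspiciously quiet” rule are invented for illustration, not taken from the study.)

    # Invented thresholds, for illustration only. The counter-intuitive finding above
    # is modeled as a "too quiet" rule: a long stretch of unusually stable vitals.
    def status(heart_rates: list, spo2: list) -> str:
        if min(spo2[-5:]) < 90 or max(heart_rates[-5:]) > 200:
            return "critical"
        recent = heart_rates[-60:]  # last hour of once-a-minute readings
        if len(recent) == 60 and max(recent) - min(recent) < 5:
            return "watch closely: unusually quiet"
        return "stable"

    # A real system would stream readings and page a doctor when the status changes,
    # rather than warehousing years of raw vitals.
    print(status([140] * 60, [97] * 60))  # -> "watch closely: unusually quiet"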

The book goes into a fair amount of detail about how to implement “big data solutions” at your office (you may have to hire someone who knows how to code, and may even have to tolerate their idiosyncrasies), which platforms are useful for data, the fact that “big data” is not all that different from the standard analytics most companies already run, etc. Once you’ve got the data pumping, actual humans may not need to be involved with it very often–for example, you may have a system that automatically updates drivers’ routes with traffic reports, or sprinklers that automatically turn on when the ground gets too dry.

It is easy to see how “big data” will become yet another facet of the algorithmization of work.

Overall, Big Data at Work is a good book, especially if you run a company, but not as amusing if you are just a lay reader. If you want something fun, read the first two.

Book Club: The Code Economy, Chapter 11: Education and Death

Welcome back to EvX’s book club. Today we’re reading Chapter 11 of The Code Economy, Education.

…since the 1970s, the economically fortunate among us have been those who made the “go to college” choice. This group has seen its income grow rapidly and its share of the aggregate wealth increase sharply. Those without a college education have watched their income stagnate and their share of the aggregate wealth decline. …

Middle-age white men without a college degree have been beset by sharply rising death rates–a phenomenon that contrasts starkly with middle-age Latino and African American men, and with trends in nearly every other country in the world.

It turns out that I have a lot of graphs on this subject. There’s a strong correlation between “white death” and “Trump support.”

[Graphs: death rates for white vs. non-white Americans, and for American whites vs. other first-world nations.]

But “white men” doesn’t tell the complete story, as death rates for women have been increasing at about the same rate. The Great White Death seems to be as much a female phenomenon as a male one–men just started out with higher death rates in the first place.

Many of these are deaths of despair–suicide, directly or through simply giving up on living. Many involve drugs or alcohol. And many are due to diseases, like cancer and diabetes, that used to hit later in life.

We might at first think the change is just an artifact of more people going to college–perhaps there was always a sub-set of people who died young, but in the days before most people went to college, nothing distinguished them particularly from their peers. Today, with more people going to college, perhaps the destined-to-die are disproportionately concentrated among folks who didn’t make it to college. However, if this were true, we’d expect death rates to hold steady for whites overall–and they have not.

Whatever is affecting lower-class whites, it’s real.

Auerswald then discusses the “permanent income hypothesis,” developed by Milton Friedman: children and young adults devote their time to education (even going into debt for it), which lets them get better jobs in mid-life. Once working, they stop going to school and start saving for retirement. Then they retire.

The permanent income hypothesis is built into the very structure of our society, from public schools that serve students between the ages of 5 and 18, to Pell Grants for college students, to Social Security benefits that kick in at 65. The assumption, more or less, is that a one-time investment in education early in life will pay off for the rest of one’s life–a payoff so large that it even makes sense for students and parents to take on tremendous debt to pay for that education.

But this is dependent on that education actually paying off–and that is dependent on the skills people learn during their education remaining in demand and sufficient for their jobs for the next 40 years.

The system falls apart if technology advances and thus job requirements change faster than once every 40 years. We are now looking at a world where people’s investments in education can be obsolete by the time they graduate, much less by the time they retire.

Right now, people are trying to make up for the decreasing returns to education (a high school diploma does not get you the same job today as it did in 1950) by investing more money and time into the single-use system–encouraging toddlers to go to school on the one end and poor students to take out more debt for college on the other.

This is probably a mistake, given the time-dependent nature of the problem.

The obvious solution is to change how we think of education and work. Instead of a single, one-time investment, education will have to continue after people begin working, probably in bursts. Companies will continually need to re-train workers in new technology and innovations. Education cannot be just a single investment, but a life-long process.

But that is hard to do if people are already in debt from all of the college they just paid for.

Auerswald then discusses some fascinating work by Bessen on how the industrial revolution affected incomes and production among textile workers:

… while a handloom weaver in 1800 required nearly forty minutes to weave a yard of coarse cloth using a single loom, a weaver in 1902 could do the same work operating eighteen Northrop looms in less than a minute, on average. This striking point relates to the relative importance of the accumulation of capital to the advance of code: “Of the roughly thirty-nine-minute reduction in labor time per yard, capital accumulation due to the changing cost of capital relative to wages accounted for just 2 percent of the reduction; invention accounted for 73 percent of the reduction; and 25 percent of the time saving came from greater skill and effort of the weavers.” … “the role of capital accumulation was minimal, counter to the conventional wisdom.”
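
(Spelling out the arithmetic in Bessen’s decomposition, using the figures as quoted:)

    # The ~39-minute drop in labor time per yard, split by Bessen's quoted shares.
    reduction_minutes = 39
    shares = {
        "capital accumulation": 0.02,
        "invention": 0.73,
        "weaver skill and effort": 0.25,
    }
    for factor, share in shares.items():
        print(f"{factor}: ~{share * reduction_minutes:.1f} minutes of the saving")
    # Invention and skill account for roughly 38 of the 39 minutes; capital for less than one.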

Then Auerswald proclaims:

What was the role of formal education in this process? Essentially nonexistent.

Boom.

New technologies are simply too new for anyone to learn about them in school. Flexible thinkers who learn fast (generalists) thus benefit from new technologies and are crucial for their early development. Once a technology matures, however, it becomes codified into platforms and standards that can be taught, at which point demand for generalists declines and demand for workers with educational training in the specific field rises.

For Bessen, formal education and basic research are not the keys to the development of economies that they are often represented as being. What drives the development of economies is learning by doing and the advance of code–processes that are driven at least as much by non-expert tinkering as by formal research and instruction.

Make sure to read the endnotes to this chapter; several of them are very interesting. For example, #3 begins:

“Typically, new technologies demand that a large number of variables be properly controlled. Henry Bessemer’s simple principle of refining molten iron with a blast of oxygen worked properly only at the right temperatures, in the right size vessel, with the right sort of vessel refractory lining, the right volume and temperature of air, and the right ores…” Furthermore, the products of these factories were really ones that, in the United States, previously had been created at home, not by craftsmen…

#8 states:

“Early-stage technologies–those with relatively little standardized knowledge–tend to be used at a smaller scale; activity is localized; personal training and direct knowledge sharing are important, and labor markets do not compensate workers for their new skills. Mature technologies–with greater standardized knowledge–operate at large scale and globally, market permitting; formalized training and knowledge are more common; and robust labor markets encourage workers to develop their own skills.” … The intensity of interactions that occur in cities is also important in this phase: “During the early stages, when formalized instruction is limited, person-to-person exchange is especially important for spreading knowledge.”

This reminds me of a post on Bruce Charlton’s blog about “Head Girl Syndrome”:

The ideal Head Girl is an all-rounder: performs extremely well in all school subjects and has a very high Grade Point Average. She is excellent at sports, Captaining all the major teams. She is also pretty, popular, sociable and well-behaved.

The Head Girl will probably be a big success in life…

But the Head Girl is not, cannot be, a creative genius.

*

Modern society is run by Head Girls, of both sexes, hence there is no place for the creative genius.

Modern Colleges aim at recruiting Head Girls, so do universities, so does science, so do the arts, so does the mass media, so does the legal profession, so does medicine, so does the military…

And in doing so, they filter-out and exclude creative genius.

Creative geniuses invent new technologies; head girls oversee the implementation and running of code. Systems that run on code can run very smoothly and do many things well, but they are bad at handling creative geniuses, as many a genius will tell you about their public school experience.

How the different stages in the adoption of a new technology, and its codification into platforms, translate into wages over time is a subject I’d like to see more about.

Auerswald then turns to the perennial problem of what happens when jobs don’t just change but disappear entirely due to increasing robotification:

Indeed, many of the frontier business models shaping the economy today are based on enabling a sharp reduction in the number of people required to perform existing tasks.

One possibility Auerswald envisions is a kind of return to the personalized markets of yesteryear, before massive giants like Walmart sprang up. Via internet-based platforms like Uber or AirBNB, individuals can connect directly with people who’d like to buy their goods or services.

Since services make up more than 84% of the US economy, and an increasingly comparable percentage in other countries, this is a big deal.

It’s easy to imagine this future in which we are all like some sort of digital Amish, continually networked via our phones to engage in small transactions like sewing a pair of trousers for a neighbor, mowing a lawn, selling a few dozen tacos, or driving people to the airport during a few spare hours on a Friday afternoon. It’s also easy to imagine how Walmart might still have massive economies of scale over individuals and the whole system might fail miserably.

However, if we take the entrepreneurial perspective, such enterprises are intriguing. Uber and Airbnb work by essentially “unlocking” latent assets–time when people’s cars or homes were sitting around unused. Anyone who can find other, similar latent assets and figure out how to unlock them could become similarly successful.

I’ve got an underutilized asset: the rural poor. People in cities are easy to hire and easy to direct toward educational opportunities. Kids growing up in rural areas are often out of the communications loop (the internet doesn’t work terribly well in many rural areas) and have to drive a long way to job interviews.

In general, it’s tough to network large rural areas in the same ways that cities get networked.

On the matter of why peer-to-peer networks have emerged in certain industries, Auerswald makes a claim that I feel compelled to contradict:

The peer-to-peer business models in local transportation, hospitality, food service, and the rental of consumer goods were the first to emerge, not because they are the most important for the economy but because these are industries with relatively low regulatory complexity.

No no no!

Food trucks emerged because heavy regulations on restaurants (e.g., fire codes, disability access, landscaping) have cut significantly into the profits of restaurants housed in actual buildings.

Uber emerged because the cost of a cab medallion–that is, a license to drive a cab–hit 1.3 MILLION DOLLARS in NYC. It’s a lucrative industry that people were being kept out of.

In contrast, there has been little peer-to-peer business innovation in healthcare, energy, and education–three industries that comprise more than a quarter of the US GDP–where regulatory complexity is relatively high.

Again, no.

There is a ton of competition in healthcare; just look up naturopaths and chiropractors. Sure, most of them are quacks, but they’re definitely out there, competing with regular doctors for patients. (Midwives appear to be actually pretty effective at what they do and significantly cheaper than standard ob-gyns.)

The difficulty with peer-to-peer healthcare isn’t regulation but knowledge and equipment. Most Americans own a car and know how to drive, and therefore can join Uber. Most Americans do not know how to do heart surgery and do not have the proper equipment to do it with. With training I might be able to set a bone, but I don’t own an x-ray machine. And you definitely don’t want me manufacturing my own medications. I’m not even good at making soup.

Education has tons of peer-to-peer innovation. I homeschool my children. Sometimes grandma and grandpa teach the children. Many homeschoolers join consortia that offer group classes, often taught by a knowledgeable parent or hired tutor. Even people who aren’t homeschooling their kids often hire tutors, through organizations like Wyzant or after-school test-prep centers like Kumon. One of my acquaintances makes her living primarily by Skype-tutoring Koreans in English.

And that’s not even counting private schools.

Yes, if you want to set up a formal “school,” you will encounter a lot of regulation. But if you just want to teach stuff, there’s nothing stopping you except your ability to find students who’ll pay you to learn it.

Now, energy is interesting. Here Auerswald might be correct. I have trouble imagining people setting up their own hydroelectric dams without getting into trouble with the EPA (not to mention everyone downstream).

But what if I set up my own windmill in my backyard? Can I connect it to the electric grid and sell energy to my neighbors on a windy day? A quick search brings up WindExchange, which says, very directly:

Owners of wind turbines interconnected directly to the transmission or distribution grid, or that produce more power than the host consumes, can sell wind power as well as other generation attributes.

So, maybe you can’t set up your own nuclear reactor, and maybe the EPA has a thing about not disturbing fish, but it looks like you can sell wind and solar energy back to the grid.

I find this a rather exciting thought.

Ultimately, while Auerswald does return to and address the need to radically change how we think about education and the education-job-retirement lifepath, he doesn’t return to the increasing white death rate. Why are white death rates increasing faster than other groups’, and will the transition to the “gig economy” further accelerate this trend? Or was the past simply anomalous in having low white death rates? Could these deaths be driven by something independent of the economy altogether?

Now, it’s getting late, so that’s enough for tonight, but what are your thoughts? How do you think this new economy–and educational landscape–will play out?