A lot of culture–aside from that time your parents dragged you to the ballet–is what we would, in honest moments, classify as “stupid things people used to do/believe.”
Now, yes, I know, it’s a bit outré for an anthropologist to declare that large swathes of culture are “stupid,” but I could easily assemble a list of hundreds of stupid things, e.g.:
The Aztecs practiced human sacrifice because they believed that if they didn’t, the world would come to an end.
In Britain, people used to believe that you could literally eat the sins of a recently deceased person, speeding their entry into Heaven. There were professional sin eaters, the last of whom, Richard Munslow, died in 1906.
Americans started eating breakfast cereal as part of an anti-masturbation campaign, and in Africa, many girls have their clitorises cut off and vaginas sewn nearly shut in a much more vigorous anti-masturbation campaign.
The Etoro of Papua New Guinea believed that young boys between the ages of 7 and 17 must “ingest” the semen of older men daily in order to mature into men.
In Mozambique, there are people who kill bald men to get the gold they supposedly have inside their heads; in the DRC, there’s a belief that eating Pygmy people will give you magic powers.
People in Salem, Massachusetts, believed that teenage girls were a good source of information on which older women in the community were witches and needed to be hanged.
Flutes assume all sorts of strange roles in various Papuan and a few Brazilian cultures–only men are allowed to see, play, or listen to the flutes, and any women who violate the flute taboo are gang raped or executed. Additionally, “…the Keraki perform flute music when a boy has been sodomized and they fear he is pregnant. This summons spirits who will protect him from such humiliation.”
Spirit possession–the belief that a god or deity can take control of and speak/dance/act through a worshiper–is found in many traditions, including West African and Haitian Voodoo. If you read Things Fall Apart, then you remember the egwugwu, villagers dressed in masks who were believed to become the spirits of gods and ancestors. Things “fall apart” after a Christian convert “kills” one of the gods by unmasking him, leading other villagers to retaliate against the local Christian mission by burning it down.
In India, people traditionally murdered their moms by pushing them into their fathers’ funeral pyres (and those were the guys who didn’t go around randomly strangling people because a goddess told them to).
People in ancient [pretty much everywhere] believed that the gods and the deceased could receive offerings (burnt or otherwise,) of meat, chairs, clothes, games, slaves, etc. The sheer quantity of grave goods buried with the deceased sometimes overwhelmed the local economy, like in ancient Egypt.
Then there’s sympathetic magic, by which things with similar properties (say, yellow sap and yellow fever, or walnuts that look like brains and actual brains) are believed to have an effect on each other.
Madagascar has a problem with bubonic plague because of a local custom of digging up dead bodies and dancing around with them.
People all over the world–including our own culture–turn down perfectly good food because it violates some food taboo they hold.
All of these customs are either stupid or terrible ideas. Of course the dead do not really come back, Zeus does not receive your burnt offering, you can’t cure yellow fever by painting someone yellow and washing off the paint or by lying in a room full of snakes, and the evil eye isn’t real, despite the fact that progressives are convinced it is. A rabbit’s foot won’t make you lucky and neither will a 4-leaf clover, and your horoscope is meaningless twaddle.
Obviously NOT ALL culture is stupid. Most of the stuff people do is sensible, because if it weren’t, they’d die out. Good ideas have a habit of spreading, though, making them less unique to any particular culture.
Many of the bad ideas people formerly held have been discarded over the years as science and literacy have given people the ability to figure out whether a claim is true or not. Superstitions about using pendulums to tell if a baby is going to be a boy or a girl have been replaced with ultrasounds, which are far more reliable. Bleeding sick patients has been replaced with antibiotics and vaccinations; sacrifices to the gods to ensure good weather have been replaced with irrigation systems.
In effect, science and technology have replaced much of the stuff that used to count as “culture.” This is why I say “science is my culture.” This works for me, because I’m a nerd, but most people aren’t all that emotionally enthralled by science. They feel a void where all of the fun parts of culture have been replaced.
Yes, the fun parts.
I like that I’m no longer dependent on the whims of the rain gods to water my crops and prevent starvation, but this also means I don’t get together with all of my family and friends for the annual rain dance. It means no more sewing costumes and practicing steps; no more cooking a big meal for everyone to enjoy. Culture involves all of the stuff we invest with symbolic meaning about the course of our lives, from birth to coming of age to marriage, birth of our own children, to old age and death. It carries meaning for families, love, and friendship. And it gives us a framework for enjoyable activities, from a day of rest from our labors to the annual “give children candy” festival.
So when people say, “Whites have no culture,” they mean four things:
A fish does not notice the water it swims in–whites have a culture, but don’t notice it because they are so accustomed to it
Most of the stupid/wrong things whites used to do that we call “culture” have been replaced by science/technology
That science/technology has spread to other cultures because it is useful, rendering white culture no longer unique
Technology/science/literacy have rendered many of the fun or emotionally satisfying parts of ritual and culture obsolete.
Too often people denigrate the scientific way of doing things on the grounds that it isn’t “cultural.” This comes up when people say things like “Indigenous ways of knowing are equally valid as Western ways of knowing.” This is a fancy way of saying that “beliefs that are ineffective at predicting the weather, growing crops, curing diseases, etc., are just as correct as beliefs that are effective at doing these things,” or [not 1] = [1].
We shouldn’t denigrate doing things in ways that actually work; science must be respected as valid. We should, however, find new ways to give people an excuse to do the fun things that used to be tied up in cultural rituals.
While researching my post on Music and Sex, I noticed a consistent pattern: fame is terrible for people.
Too many musicians to list have died from drug overdoses or suicide. Elvis died of a drug overdose. John Lennon attracted the attention of a crazy fan who assassinated him. Kurt Cobain killed himself (or, yes, conspiracy theorists note, might have been murdered). Linkin Park’s Chester Bennington committed suicide. Alice in Chains’s Layne Staley died of a heroin overdose. The list continues.
Far more have seen their personal relationships fail, time after time. The lives of stars are filled with breakups and drama, not just because tabloids care to report on them, but also because of the drugs, wealth, and easy availability of other partners.
At least musicians get something (money, sex,) out of their fame, and most went willingly into it (child stars, not so much). But many people today are thrust completely unwillingly into the spotlight and get nothing from it–people caught on camera in awkward incidents, or whose absurd video suddenly went viral for all the wrong reasons, or who caught the notice of an internet mob.
Here we have people like the students from Covington Catholic, or the coffee shop employee who lost her job after not serving a black woman who arrived after the shop had closed, or, for that matter, almost all of the survivors of mass shootings, especially the ones that attract conspiracy theorists.
It seems that fame, like many other goods, is a matter of decreasing returns. Going from zero fame to a little fame is nearly always good. Companies have to advertise products so customers know they exist. Being known as an expert in your field will net you lots of business, recommendations, or just social capital. Being popular in your school or community is generally pleasant.
At this level, increasing fame means increasing numbers of people who know and appreciate your work, while still remaining obscure enough that people who don’t like or care for your creations will simply ignore you.
Beyond a certain level of fame, though, you’ve already gotten the attention of most people who like you, and are now primarily reaching people who aren’t interested or don’t like you. If you become sufficiently famous, your fame alone will drive people who dislike your work to start complaining about how stupid it is that someone who makes such terrible work can be so famous. No one feels compelled to talk about how much they hate a local indie band enjoyed by a few hundred teens, but millions of people vocally hate Marilyn Manson.
Sufficient fame, therefore, attracts more haters than lovers.
This isn’t too big a deal if you’re a rock star, because you at least still have millions of dollars and adoring fans. This is a big deal if you’re just an ordinary person who accidentally became famous and wasn’t prepared in any way to make money or deal with the effects of a sudden onslaught of hate.
Fame wasn’t always like this, because media wasn’t always like this. There were no million-album recording artists in the 1800s. There were no viral internet videos in the 1950s. Like everything in Texas, fame in our winner-take-all economy is bigger–and thus so are its effects.
I think we need to tread this fame-ground very carefully. Recognize when we (or others) are thrusting unprepared people into the spotlight, and withdraw from mobbing tactics. Teenagers, clearly, should not be famous. But more mundane people, like writers who have to post under their real names (or well-known pseudonyms), probably also need to take steps to insulate themselves from the spasms of random mobs of haters. The current trend of writers taking mobs–at least SJW mobs–seriously and trying to appease them is another effect of people having fame thrust upon them that they don’t know how to deal with.
In Sociobiology, E. O. Wilson defines a “population” as a group that (more or less) inter-breeds freely, while a “society” is a group that communicates. Out in nature, the borders of a society and a population are usually the same, but not always.
Modern communication has created a new, interesting human phenomenon–our “societies” no longer match our “populations.”
Two hundred years ago, news traveled only as fast as a horse (or a ship), cameras didn’t exist, and newspapers/books were expensive. By necessity, people got most of their information from the other people around them. One hundred years ago, the telegraph had sped up communication, but photography was expensive, movies had barely been invented, and information still traveled slowly. News from the front lines during WWI arrived home well after the battles occurred–probably contributing significantly to the delay in realizing that military strategies were failing horrifically.
Today, the internet/TV/cheap printing/movies/etc are knitting nations into conversational blocks limited only by language (and even that is a small barrier, given the automation of pretty effective translation), but still separated by national borders. It’s fairly normal now to converse daily with people from three or four different countries, but never actually meet them.
This is really new, really different, and kind of weird.
Since we can all talk to each other, people are increasingly, it seems, treating each other as one big society, despite the fact that we hail from different cultures and live under different governments. What happens in one country or to one group of people reverberates across the world. An American comforts a friend in Malaysia who is sick to her stomach because of a shooting in New Zealand. Both agree that the shooting actually had nothing to do with a popular Swedish YouTuber, despite the shooter enjoining his viewers (while livestreaming the event) to “subscribe to Pewdiepie.” Everything is, somehow, the fault of the American president, or maybe we should go back further, and blame the British colonists.
It’s been a rough day for a lot of people.
Such big “societies” are unwieldy. Many of us dislike each other. We certainly can’t control each other (not without extreme tactics), and no one likes feeling blamed for someone else’s actions. Yet we all want each other to behave better, to improve. How to improve is a tough question.
We also want to be treated well by each other, but how often do we encounter people who are simply awful?
The same forces that knit us together also split us apart–and what it means to be a society but not a population remains to be seen.
This is a review for Alita: Battle Angel, now out in theaters. If you want the review without spoilers, scroll down quickly to the previous post.
It is difficult for any movie to be truly deep. Is Memento deep, or does it just use a backwards-narrative gimmick? Often meaning is something we bring to movies–we interpret them based on our own experiences.
What is the point of cyborgs? They are the ultimate fusion of man and machine. Our technology doesn’t just serve us; it has become us. What are we, then? Are cyborgs human, or more than human? And what of the un-enhanced meatsacks left behind?
Throughout the movie, we see humans with various levels of robotic enhancement, from otherwise normal people with an artificial limb to monstrous brawlers that are almost unrecognizable as human. Alita is a complete cyborg whose only remaining “human” part is her biological brain (perhaps she has a skull, too). The rest of her, from heart to toes, is machine, and can be disassembled and replaced as necessary.
The graphic novels go further than Alita–in one case, a whole community breaks down after it discovers that the adults have had their brains replaced with computer chips. Can a “human” have a metal body but a meat brain? Can a “human” have a meat body but a computer brain? Alita says yes, that humanity is more than just the raw material we are built of.
(The question also cuts in the other direction–is the jet-powered hammer Ido uses in battle any different from a jet-powered hammer built into your arm? Does it matter whether you can put the technology down and pack it into a suitcase at the end of the day, or whether it is built into your core?)
Yet cyborgs in Alita’s world, despite their obvious advantage over mere humans in terms of speed, reflexes, strength, and ability to switch your arms out for power saws, are mostly true to their origin as disabled people whose bodies were replaced with artificial limbs. Alita’s first body, given to her at the beginning of the movie after she is found without one, was originally built for a little girl in a wheelchair. She reflects to a friend that she is now fast because the little girl’s father built her a fast pair of legs so she could finally run.
The upper class–to the extent that we see them–has no obvious enhancements. Indeed, the most upper class family we meet in the movie, which originally lived in the floating city of Tiphares (Zalem in the movie) was expelled from the city and sent down to the scrap yard with the rest of the trash because of their disabled daughter–the one whose robotic body Alita inherits.
Hugo is an ordinary meat boy with what we may interpret as a serious prejudice against cyborgs–though he comes across as a nice lad, he moonlights as a thief who kidnaps cyborgs and chops off their body parts for sale on the black market. Hugo justifies himself by claiming he “never killed anyone,” which is probably true, but the process certainly hurts the cyborgs (who cry out in pain as their limbs are sawed off) and leaves them lying disabled in the street.
Hugo isn’t doing it because he hates cyborgs, though. They’re just his ticket to money–the money he needs to get to Tiphares/Zalem. For even though it is said that no one in the Scrap Yard (Iron City in the movie) is ever allowed into Tiphares, people still dream of Heaven. Hugo believes a notorious fixer named Vector can get him into Tiphares if he just pays him enough money.
Some reviewers have identified Vector as the Devil himself, based on his line, “Better to reign in Hell than serve in Heaven,” which the Devil speaks in Milton’s Paradise Lost–though Milton is himself reprising Achilles in the Odyssey, who claims, “By god, I’d rather slave on earth for another man / some dirt-poor tenant farmer who scrapes to keep alive / than rule down here over all the breathless dead.”
Yet the Scrap Yard is not Hell. Hell is another layer down; it is the sewers below the Scrap Yard, where Alita’s first real battle occurs. The Scrap Yard is Purgatory; the Scrap Yard is Earth, suspended between Heaven and Hell, from which people can choose to arise (to Tiphares) or descend (to the sewers). But whether Tiphares is really Heaven or just a dream they’ve been sold remains to be seen–for everyone in the Scrap Yard is fallen, and none may enter Heaven.
Alita, you probably noticed, descended into Hell to fight an evil monster–in the manga, because he kidnapped a baby; in the movie because he was trying to kill her. In the ensuing battle, she is crushed and torn to pieces, sacrificing her final limb to drill out the monster’s eye. Her unconscious corpse is rescued by her friends, dragged back to the surface, and then rebuilt with a new body.
“I do not stand by in the presence of evil”–Alita
“But let me reveal to you a wonderful secret. We will not all die, but we will all be transformed! It will happen in a moment, in the blink of an eye, when the last trumpet is blown. For when the trumpet sounds, those who have died will be raised to live forever. And we who are living will also be transformed. For our dying bodies must be transformed into bodies that will never die; our mortal bodies must be transformed into immortal bodies.” 1 Corinthians 15:51–53
Alita has died and been resurrected. Whether she will ascend into Heaven remains a matter for the sequel. (She does. Obviously.)
Through his relationship with Alita (they smooch), Hugo realizes that cyborgs are people, too, and maybe he shouldn’t chop them up for money. “You are more human than anyone I know,” he tells her.
Alita, in a scene straight from The Last Temptation of Christ, offers Hugo her heart–literally–to sell to raise the remaining money he needs to make it to Tiphares.
Hugo, thankfully, declines the offer, attempting to make it to Tiphares on his own two feet (newly resurrected after Alita saves his life by literally hooking him up to her own life support system)–but no mere mortal can ascend to Tiphares; even giants may not assault the gates of Heaven.
The people of the Scrap Yard are fallen–literally–from Tiphares, their belongings and buildings either relics from the time before the fall or from trash dumped from above. There is hope in the Scrap Yard, yet the Scrap Yard generates very little of its own, explaining its entirely degraded state.
This is a point where the movie fails–the set builders made the set too nice. The Scrap Yard is a decaying, post-apocalyptic waste filled with strung-out junkies and hyper-violent-TV addicts. In one scene in the manga, Doc Ido, injured, collapses in the middle of a crowd while trying to drag the remains of Alita’s crushed body back home so he can fix her. Bleeding, he cries out for help–but the crowd, entranced by the story playing out on the screens around them, ignores them both.
In the movie, the Scrap Yard has things like oranges and chocolate–suggesting long-distance trade and large-scale production–things it really shouldn’t be able to support. In the manga, the lack of police makes sense, as this is a society with no ability to cooperate for the common good. Since the powers that be would like to at least prevent their own deaths at the hands of murderers, the Scrap Yard instead puts bounties on the heads of criminals, and licensed “Hunter Warriors” decapitate them for money.
(A hunter license is not difficult to obtain. They hand them out to teenage girls.)
Here the movie enters its discussion of Free Will.
Alita awakes with no memory of her life before she became a decapitated head sitting in a landfill. She has the body of a young teen and, thankfully, adults willing to look out for her as she learns about life in Iron City from the ground up–first, that oranges have to be peeled; second, that cars can run you over.
The movie adds the backstory about Doc Ido’s deceased, disabled daughter for whom he built the original body that he gives to Alita. This is a good move, as it makes explicit a relationship that takes much longer to develop in the manga (movies just don’t have the same time to develop plots as a manga series spanning decades.) Since Alita has no memory, she doesn’t remember her own name (Yoko). Doc therefore names her “Alita,” after the daughter whose body she now wears.
As an adopted child myself, I feel a certain kinship with narratives about adoption. Doc wants his daughter back. Alita wants to discover her true identity. Like any child, she is growing up, discovering love, and wants different things for her life than her father does.
Despite her amnesia, Alita has certain instincts. When faced with danger, she responds–without knowing how or why–with a sudden explosion of violence, decapitating a cyborg that has been murdering young women in her neighborhood. Alita can fight; she is extremely skilled in an advanced martial art developed for cyborgs. In short, she is a Martian battle droid that has temporarily mistaken itself for a teenage girl.
She begs Ido to hook her up to a stronger body (the one intended for his daughter was not built with combat in mind,) but he refuses, declaring that she has a chance to start over, to become something totally new. She has free will. She can become anything–so why become a battle robot all over again?
But Alita cannot just remain Doc’s little girl. Like all children, she grows–and like most adopted children, she wants to know who she is and where she comes from. She is good at fighting. This is her only connection to her past, and as she asserts, she has a right to that. Doc Ido has no right to dictate her future.
What is Alita? As far as she knows, she is trash, broken refuse literally thrown out through the Tipharean rubbish chute. The worry that you were adopted because you were unwanted by your biological parents–thrown away–plagues many adopted children. But as Alita discovers, this isn’t true. She’s not trash–she’s an alien warrior who once attacked Earth and ended up unconscious in the scrap yard after losing most of her body in the battle. Like the Nephilim, she is a heavenly battle angel who literally fell to Earth.
By day, Ido is a doctor, healing people and fixing cyborgs. By night, he is a Hunter Warrior, killing people. For Ido, killing is an expression of rage after his daughter’s death, a way of channeling a psychotic impulse into something that benefits society by aiming it at people even worse than himself. But for Alita, violence serves a greater purpose–she uses her talent to eliminate evil and serve justice. Alita’s will is to protect the people she loves.
After Alita runs away, gets in a fight, descends into Hell, and is nearly completely destroyed, Doc relents and attaches her to a more powerful, warrior body. He recognizes that time doesn’t freeze and he cannot keep Alita forever as his daughter (a theme revisited later in the manga when Nova tries to trap Alita in an alternative-universe simulation where she never becomes a Hunter Warrior).
In an impassioned speech, Nova declares, “I spit upon the second law of thermodynamics!” He wants to freeze time; prevent decay. But even Nova, as we have seen, cannot contain Alita’s will. She knows it is a simulation. She plays along for a bit, enjoying the story, then breaks out.
Alita’s new body uses “nanotechnology,” which is to say, magic, to keep her going. Indeed, the technology in the movie is no more explained than magic in Harry Potter, other than some technobabble about how Alita’s heart contains a miniature nuclear reactor that could power the whole city, which is how she was able to stay alive for 300 years in a trash heap.
With her more powerful body, Alita is finally able to realize herself.
Alita’s maturation from infant (a living head completely unable to move,) to young adult is less explicit in the movie than in the manga, but it is still there–with the reconfiguration of her new body based on Alita’s internal self-image, Doc discovers that “She is a bit older than you thought she was.” In a dream sequence in the original, the metaphors are made explicit–limbless Alita in one scene becomes an infant strapped to Doc’s back as he roots through the dump for parts. Then she receives a pair of arms, and finally legs, turning into a toddler and a girl. Finally, with her berserker body, she achieves adulthood.
But with all of this religious imagery, is Tiphares really heaven? Of course not–if it were, why would Nova–who is the true villain trying to kill her–live there? There was a war in the Heavens–but the Heavens are far beyond Tiphares. Alita will escape Purgatory and ascend to Tiphares–and unlike the others, she will not do it by being chopped into body parts for Nova’s experiments.
For the mind is its own place, and can make a Heaven of Hell, and a Hell of Heaven.
Tiphares is only the beginning, just as the Scrap Yard is not the Hell we take it for.
I have not seen a movie aimed at adults, in an actual theater, in over a decade. Alita: Battle Angel broke my movie fast because I was a huge fan of the manga.
It was marvelous.
I can’t judge the movie from the perspective of someone who has seen every last Marvel installment, nor one who hasn’t read the manga. But it is visually stunning, with epic battle scenes and a philosophical core.
What does it mean to be human? Can robots be human? What about humanoid battle cyborgs? Alita is simultaneously human–a teenage girl searching for her place in this world–and inhuman–a devastating battle droid.
I don’t want to give away too many spoilers, so I’ll showcase the trailer:
Yes, she has giant anime eyes. You get used to it quickly.
I saw it in 3D, which was amazing–the technology we have for making and distributing movies in general is amazing, but this is a film whose action sequences really stand out in the medium.
The story is basically true to its manga inspiration, though there are obvious changes. The original story is much too long for a single movie, for example, and the characters often paused in the middle of battle for philosophical conversations. The movie lets the philosophy hang more in the background (even skipping the Nietzsche).
The movie’s biggest weakness was the main set, which was just too pleasant looking to be as gritty as the characters regarded it. There are a few other world-building inconsistencies, but nothing on the scale of “Why didn’t the giant eagles just fly the ring to Mordor?” or “how does money work in Harry Potter?”
The movie has no shoe-horned-in political agenda–Alita never stops to whine about how women are treated in Iron City, for example; she just explodes with family-protecting violence. The plot is structured around class inequality, but this is a fairly believable backdrop, since class is a real thing we all deal with in the real world. The movie does feature the “tiny girl who can beat up big bad guys” trope, but then, she is a battle droid made of metal, so Alita’s fighting skills make more sense than, say, River Tam’s.
Unfortunately, there are a few loose ends that are clearly supposed to carry into a sequel, which may not happen if all of the nay-sayers get their way. This makes the movie feel a touch unfinished–the story isn’t over.
So what’s with all of the bad reviews?
Over on Rotten Tomatoes, the critics gave the movie a 60% rating, while the moviegoing public has given it a 93% rating. That’s quite the split. Perhaps there are some movies that critics just don’t get but certain fans love. But I note that other superhero movies, like Iron Man and Guardians of the Galaxy, received quite good reviews, despite the fact that pretty much all superhero movies are absurd if you think about them for too long. (GotG stars a raccoon, for goodness’ sake.)
Overall, if you like superhero/action movies, you will probably like Alita.
So why did I like the manga so much?
In part, it was just timing–I had a Japanese friend and we liked to hang out and watch anime together. In part it was artistic–Alita is a lovely character, and as a young female, I was smitten both with her cyborg good looks and the fact that she looks more like me than most superheroes. I spent much of my youth drawing cyborg girls of my own. Beyond that, it’s hard to say–sometimes you just like something.
Watson parses questions into different keywords and sentence fragments in order to find statistically related phrases. Watson’s main innovation was not in the creation of a new algorithm for this operation but rather its ability to quickly execute hundreds of proven language analysis algorithms simultaneously. The more algorithms that find the same answer independently the more likely Watson is to be correct. Once Watson has a small number of potential solutions, it is able to check against its database to ascertain whether the solution makes sense or not.
That is at least one reason why Watson represents such a significant milestone: Jeopardy! is precisely such a challenging language task. … What is perhaps not evident to many observers is that Watson not only had to master the language in the unexpected and convoluted queries, but for the most part its knowledge was not hand-coded. It obtained that knowledge by actually reading 200 million pages of natural-language documents, including all of Wikipedia… If Watson can understand and respond to questions based on 200 million pages–in three seconds!–there is nothing to stop similar systems from reading the other billions of documents on the Web. Indeed, that effort is now under way.
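Kurzweil describes Watson’s approach only in prose, but the ensemble idea is easy to caricature in a few lines of Python. This is a purely illustrative sketch; the analyzer functions and the simple voting rule are hypothetical stand-ins, not IBM’s actual algorithms:

```python
from collections import Counter

def ensemble_answer(question, analyzers):
    """Run many independent analysis algorithms and vote on their candidate answers.

    `analyzers` is a list of functions, each mapping a question string to a
    candidate answer (or None). The more analyzers that independently converge
    on the same answer, the more confident we are in it.
    """
    votes = Counter()
    for analyze in analyzers:
        candidate = analyze(question)
        if candidate is not None:
            votes[candidate] += 1
    if not votes:
        return None, 0.0
    best, count = votes.most_common(1)[0]
    return best, count / len(analyzers)

# Toy usage with hypothetical "analyzers":
analyzers = [
    lambda q: "Toronto" if "Canada" in q else None,
    lambda q: "Toronto" if "largest city" in q else None,
    lambda q: "Ottawa" if "capital" in q else None,
]
print(ensemble_answer("What is the largest city in Canada?", analyzers))
# ('Toronto', 0.666...)
```

The real system also checks its leading candidates against its database, as the passage above notes; that verification step is omitted from this sketch.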
A point about the history of computing that may be petty of me to emphasize:
Babbage’s conception is quite miraculous when you consider the era in which he lived and worked. However, by the mid-twentieth century, his ideas had been lost in the mists of time (although they were subsequently rediscovered.) It was von Neumann who conceptualized and articulated the key principles of the computer as we know it today, and the world recognizes this by continuing to refer to the von Neumann machine as the principal model of computation. Keep in mind, though, that the von Neumann machine continually communicates data between its various units and within those units, so it could not be built without Shannon’s theorems and the methods he devised for transmitting and storing reliable digital information. …
You know what? No, it’s not petty.
Amazon lists 57 books about Ada Lovelace aimed at children, 14 about Alan Turing, and ZERO about John von Neumann.
(As always, some of these search results are irrelevant, but the counts are roughly correct.)
“EvX,” you may be saying, “Why are you counting children’s books?”
Because children are our future, and the books that get published for children show what society deems important for children to learn–and will have an effect on what adults eventually know.
I don’t want to demean Ada Lovelace’s role in the development of software, but surely von Neumann’s contributions to the field are worth a single book!
*Slides soapbox back under the table*
Anyway, back to Kurzweil, now discussing quantum mechanics:
There are two ways to view the questions we have been considering–converse Western and Eastern perspectives on the nature of consciousness and of reality. In the Western perspective, we start with a physical world that evolves patterns of information. After a few billion years of evolution, the entities in that world have evolved sufficiently to become conscious beings. In the Eastern view, consciousness is the fundamental reality; the physical world only comes into existence through the thoughts of conscious beings. …
The East-West divide on the issue of consciousness has also found expression in opposing schools of thought in the field of subatomic physics. In quantum mechanics, particles exist in what are called probability fields. Any measurement carried out on them by a measuring device causes what is called a collapse of the wave function, meaning that the particle suddenly assumes a particular location. A popular view is that such a measurement constitutes observation by a conscious observer… Thus the particle assumes a particular location … only when it is observed. Basically particles figure that if no one is bothering to look at them, they don’t need to decide where they are. I call this the Buddhist school of quantum mechanics …
Or as Niels Bohr put it, “A physicist is just an atom’s way of looking at itself.” He also claimed that we could describe electrons as exercising free will in choosing their positions, a statement I do not think he meant literally; “We must be clear that when it comes to atoms, language can be used only as in poetry,” as he put it.
Kurzweil explains the Western interpretation of quantum mechanics:
There is another interpretation of quantum mechanics… In this analysis, the field representing a particle is not a probability field, but rather just a function that has different values in different locations. The field, therefore, is fundamentally what the particle is. … The so-called collapse of the wave function, this view holds, is not a collapse at all. … It is just that a measurement device is also made up of particles with fields, and the interaction of the particle field being measured and the particle fields of the measuring device result in a reading of the particle being in a particular location. The field, however, is still present. This is the Western interpretation of quantum mechanics, although it is interesting to note that the more popular view among physicists worldwide is what I have called the Eastern interpretation.
For example, Bohr has the yin-yang symbol on his coat of arms, along with the motto contraria sunt complementa, or contraries are complementary. Oppenheimer was such a fan of the Bhagavad Gita that he read it in Sanskrit and quoted it upon successful completion of the Trinity Test, “If the radiance of a thousand suns were to burst at once into the sky, that would be like the splendor of the mighty one,” and “Now I am become death, the destroyer of worlds.” He credited the Gita as one of the most important books in his life.
Why the appeal of Eastern philosophy? Is it something about physicists and mathematicians? Leibniz, after all, was fond of the I Ching. As Wikipedia says:
Leibniz was perhaps the first major European intellectual to take a close interest in Chinese civilization, which he knew by corresponding with, and reading other works by, European Christian missionaries posted in China. Having read Confucius Sinarum Philosophus on the first year of its publication, he concluded that Europeans could learn much from the Confucian ethical tradition. He mulled over the possibility that the Chinese characters were an unwitting form of his universal characteristic. He noted with fascination how the I Ching hexagrams correspond to the binary numbers from 000000 to 111111, and concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical mathematics he admired. Leibniz communicated his ideas of the binary system representing Christianity to the Emperor of China hoping it would convert him. Leibniz may be the only major Western philosopher who attempted to accommodate Confucian ideas to prevailing European beliefs.
Leibniz’s attraction to Chinese philosophy originates from his perception that Chinese philosophy was similar to his own. The historian E.R. Hughes suggests that Leibniz’s ideas of “simple substance” and “pre-established harmony” were directly influenced by Confucianism, pointing to the fact that they were conceived during the period that he was reading Confucius Sinarum Philosophus.
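Leibniz’s observation about the hexagrams is easy to check: a hexagram is six lines, each either broken or solid, so the 64 hexagrams enumerate exactly the six-bit binary numbers 000000 through 111111. A toy sketch (the encoding convention here, broken = 0 and solid = 1 with the first line as the most significant bit, is just one arbitrary choice for illustration):

```python
def hexagram_to_int(lines):
    """Convert a six-line hexagram (list of six 0s and 1s) to its binary value."""
    assert len(lines) == 6
    value = 0
    for bit in lines:  # first line treated as the most significant bit
        value = (value << 1) | bit
    return value

print(hexagram_to_int([0, 0, 0, 0, 0, 0]))  # 0  (all broken lines)
print(hexagram_to_int([1, 1, 1, 1, 1, 1]))  # 63 (all solid lines)
```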
Perhaps it is just that physicists and mathematicians are naturally curious people, and Eastern philosophy is novel to a Westerner, or perhaps by adopting Eastern ideas, they were able to purge their minds of earlier theories of how the universe works, creating a blank space in which to evaluate new data without being biased by old conceptions–or perhaps it is just something about the way their minds work.
As for quantum, I favor the de Broglie-Bohm interpretation of quantum mechanics, but obviously I am not a physicist and my opinion doesn’t count for much. What do you think?
But back to the book. If you are fond of philosophical ruminations on the nature of consciousness, like “If someone who could only see in black and white read extensively about the color ‘red,’ could they ever achieve the qualia of actually seeing the color red?” or “What if a man were locked in a room with a perfect Chinese rulebook that told him which Chinese characters to write in response to any set of characters written on notes passed under the door? The responses would be in perfect Chinese, but the man himself understands not a word of Chinese,” then you’ll enjoy the discussion. If you already covered all of this back in Philosophy 101, you might find it a bit redundant.
Kurzweil notes that conditions have improved massively over the past century for almost everyone on earth, but people are increasingly anxious:
A primary reason people believe life is getting worse is because our information about the problems of the world has steadily improved. If there is a battle today somewhere on the planet, we experience it almost as if we were there. During World War II, tens of thousands of people might perish in a battle, and if the public could see it at all, it was in a grainy newsreel in a movie theater weeks later. During World War I a small elite could read about the progress of the conflict in the newspaper (without pictures). During the nineteenth century there was almost no access to news in a timely fashion for anyone.
As for the future of man, machines, and code, Kurzweil is even more optimistic than Auerswald:
The last invention that biological evolution needed to make–the neocortex–is inevitably leading to the last invention that humanity needs to make–truly intelligent machines–and the design of one is inspiring the other. … by the end of this century we will be able to create computation at the limits of what is possible, based on the laws of physics… We call matter and energy organized in this way “computronium,” which is vastly more powerful pound per pound than the human brain. It will not just be raw computation but will be infused with intelligent algorithms constituting all of human-machine knowledge. Over time we will convert much of the mass and energy in our tiny corner of the galaxy that is suitable for this purpose to computronium. … we will need to spread out to the rest of the galaxy and universe. …
How long will it take for us to spread our intelligence in its nonbiological form throughout the universe? … waking up the universe, and then intelligently deciding its fate by infusing it with our human intelligence in its nonbiological form, is our destiny.
Whew! That is quite the ending–and with that, we will end as well. I hope you enjoyed the book. What did you think of it? Will Humanity 2.0 be good? Bad? Totally different? Or does the Fermi Paradox imply that Kurzweil is wrong? Did you like this shorter Book Club format? And do you have any ideas for our next Book Club pick?
If you aren’t familiar with Ray Kurzweil (you must be new to the internet), he is a computer scientist, inventor, and futurist whose work focuses primarily on artificial intelligence and phrases like “technological singularity.”
Wikipedia really likes him.
The book is part neuroscience, part explanations of how various AI programs work. Kurzweil uses models of how the brain works to enhance his pattern-recognition programs, and evidence from what works in AI programs to build support for theories on how the brain works.
The book delves into questions like “What is consciousness?” and “Could we recognize a sentient machine if we met one?” along with a brief history of computing and AI research.
My core thesis, which I call the Law of Accelerating Returns (LOAR), is that fundamental measures of information technology follow predictable and exponential trajectories…
The quintessential example of the law of accelerating returns is the perfectly smooth, doubly exponential growth of the price/performance of computation, which has held steady for 110 years through two world wars, the Great Depression, the Cold War, the collapse of the Soviet Union, the reemergence of China, the recent financial crisis, … Some people refer to this phenomenon as “Moore’s law,” but… [this] is just one paradigm among many.
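A quick illustration of what “doubly exponential” means (the growth parameters below are invented for demonstration, not Kurzweil’s fitted figures): in ordinary exponential growth the exponent grows linearly with time, while in doubly exponential growth the exponent itself grows exponentially.

```python
def exponential(t, base=2.0):
    """Ordinary exponential growth: price/performance doubles every time unit."""
    return base ** t

def doubly_exponential(t, rate=1.05, base=2.0):
    """Doubly exponential growth: the exponent itself compounds at `rate` per unit."""
    return base ** (rate ** t)

# With these made-up parameters the doubly exponential curve starts slower
# but eventually dwarfs the plain exponential.
for year in (10, 50, 110):
    print(year, f"{exponential(year):.3g}", f"{doubly_exponential(year):.3g}")
```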
Auerswald claims that the advance of “code” (that is, technologies like writing that allow us to encode information) has, for the past 40,000 years or so, supplemented and enhanced human abilities, making our lives better. Auerswald is not afraid of increasing mechanization and robotification of the economy putting people out of jobs because he believes that computers and humans are good at fundamentally different things. Computers, in fact, were invented to do things we are bad at, like decode encryption, not stuff we’re good at, like eating.
The advent of computers, in his view, lets us concentrate on the things we’re good at, while off-loading the stuff we’re bad at to the machines.
Kurzweil’s view is different. While he agrees that computers were originally invented to do things we’re bad at, he also thinks that the computers of the future will be very different from those of the past, because they will be designed to think like humans.
A computer that can think like a human can compete with a human–and since it isn’t limited in its processing power by pelvic widths, it may well out-compete us.
But Kurzweil does not seem worried:
Ultimately we will create an artificial neocortex that has the full range and flexibility of its human counterpart. …
When we augment our own neocortex with a synthetic version, we won’t have to worry about how much additional neocortex can physically fit into our bodies and brains, as most of it will be in the cloud, like most of the computing we use today. I estimated earlier that we have on the order of 300 million pattern recognizers in our biological neocortex. That’s as much as could be squeezed into our skulls even with the evolutionary innovation of a large forehead and with the neocortex taking about 80 percent of the available space. As soon as we start thinking in the cloud, there will be no natural limits–we will be able to use billions or trillions of pattern recognizers, basically whatever we need, and whatever the law of accelerating returns can provide at each point in time. …
Last but not least, we will be able to back up the digital portion of our intelligence. …
That is kind of what I already do with this blog. The downside is that sometimes you people see my incomplete or incorrect thoughts.
On the squishy side, Kurzweil writes of the biological brain:
The story of human intelligence starts with a universe that is capable of encoding information. This was the enabling factor that allowed evolution to take place. …
The story of evolution unfolds with increasing levels of abstraction. Atoms–especially carbon atoms, which can create rich information structures by linking in four different directions–formed increasingly complex molecules. …
A billion years later, a complex molecule called DNA evolved, which could precisely encode lengthy strings of information and generate organisms described by these “programs”. …
The mammalian brain has a distinct aptitude not found in any other class of animal. We are capable of hierarchical thinking, of understanding a structure composed of diverse elements arranged in a pattern, representing that arrangement with a symbol, and then using that symbol as an element in a yet more elaborate configuration. …
Through an unending recursive process we are capable of building ideas that are ever more complex. … Only Homo sapiens have a knowledge base that itself evolves, grows exponentially, and is passed down from one generation to another.
Kurzweil proposes an experiment to demonstrate something of how our brains encode memories: say the alphabet backwards.
If you’re among the few people who’ve memorized it backwards, try singing “Twinkle Twinkle Little Star” backwards.
It’s much more difficult than doing it forwards.
This suggests that our memories are sequential and in order. They can be accessed in the order they are remembered. We are unable to reverse the sequence of a memory.
Funny how that works.
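A loose computational analogy (mine, not Kurzweil’s): a sequence stored with only forward links is cheap to traverse in order, but reciting it in reverse requires replaying and rebuilding the whole thing.

```python
def recite_forward(sequence):
    """Forward recall: simply follow the stored order."""
    return list(sequence)

def recite_backward(sequence):
    """Backward recall: there is no stored reverse link, so we must replay
    the sequence from the start and rebuild it in reverse order."""
    replayed = []
    for item in sequence:
        replayed.insert(0, item)  # each step re-shuffles everything recalled so far
    return replayed

alphabet = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
print("".join(recite_forward(alphabet)))   # ABCDEFGHIJKLMNOPQRSTUVWXYZ
print("".join(recite_backward(alphabet)))  # ZYXWVUTSRQPONMLKJIHGFEDCBA
```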
On the neocortex itself:
A critically important observation about the neocortex is the extraordinary uniformity of its fundamental structure. … In 1957 Mountcastle discovered the columnar organization of the neocortex. … [In 1978] he described the remarkably unvarying organization of the neocortex, hypothesizing that it was composed of a single mechanism that was repeated over and over again, and proposing the cortical column as the basic unit. The differences in the height of certain layers in different regions noted above are simply differences in the amount of interconnectivity that the regions are responsible for dealing with. …
extensive experimentation has revealed that there are in fact repeating units within each column. It is my contention that the basic unit is a pattern recognizer and that this constitutes the fundamental component of the neocortex.
As I read, Kurzweil’s hierarchical models reminded me of Chomsky’s theories of language–Ray and Noam are both associated with MIT and have probably conversed many times. Kurzweil does get around to discussing Chomsky’s theories and their relationship to his work:
Language is itself highly hierarchical and evolved to take advantage of the hierarchical nature of the neocortex, which in turn reflects the hierarchical nature of reality. The innate ability of humans to learn the hierarchical structures in language that Noam Chomsky wrote about reflects the structure of the neocortex. In a 2002 paper he co-authored, Chomsky cites the attribute of “recursion” as accounting for the unique language faculty of the human species. Recursion, according to Chomsky, is the ability to put together small parts into a larger chunk, and then use that chunk as a part in yet another structure, and to continue this process iteratively. In this way we are able to build the elaborate structure of sentences and paragraphs from a limited set of words. Although Chomsky was not explicitly referring here to brain structure, the capability he is describing is exactly what the neocortex does. …
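Here is a minimal sketch of recursion in this sense, using a toy grammar of my own invention (not Chomsky’s formalism): small parts combine into chunks, and chunks become parts of still larger chunks.

```python
import random

# A toy recursive grammar: a noun phrase can embed a verb phrase,
# and a verb phrase can embed a whole sentence.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "who", "VP"]],
    "VP": [["V", "NP"], ["V", "that", "S"]],
    "N":  [["dog"], ["cat"], ["linguist"]],
    "V":  [["saw"], ["said"], ["chased"]],
}

def expand(symbol, depth=0, max_depth=4):
    """Recursively expand a symbol into words, capping depth so we always terminate."""
    if symbol not in GRAMMAR:
        return [symbol]                # a terminal word
    options = GRAMMAR[symbol]
    if depth >= max_depth:
        options = [options[0]]         # past the cap, take the non-recursive option
    words = []
    for part in random.choice(options):
        words.extend(expand(part, depth + 1, max_depth))
    return words

print(" ".join(expand("S")))
# e.g. "the linguist said that the dog chased the cat"
```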
Kurzweil’s account sounds plausible to me, but I am under the impression that Chomsky’s linguistic theories are now considered outdated. Perhaps that is only his theory of universal grammar, though. Any linguistics experts care to weigh in?
The basis to Chomsky’s linguistic theory is rooted in biolinguistics, holding that the principles underlying the structure of language are biologically determined in the human mind and hence genetically transmitted. He therefore argues that all humans share the same underlying linguistic structure, irrespective of sociocultural differences. In adopting this position, Chomsky rejects the radical behaviorist psychology of B. F. Skinner, which views the mind as a tabula rasa (“blank slate”) and thus treats language as learned behavior. Accordingly, he argues that language is a unique evolutionary development of the human species and is unlike modes of communication used by any other animal species. Chomsky’s nativist, internalist view of language is consistent with the philosophical school of “rationalism,” and is contrasted with the anti-nativist, externalist view of language, which is consistent with the philosophical school of “empiricism.”
Anyway, back to Kurzweil, who has an interesting bit about love:
Science has recently gotten into the act as well, and we are now able to identify the biochemical changes that occur when someone falls in love. Dopamine is released, producing feelings of happiness and delight. Norepinephrine levels soar, which leads to a racing heart and overall feelings of exhilaration. These chemicals, along with phenylethylamine, produce elevation, high energy levels, focused attention, loss of appetite, and a general craving for the object of one’s desire. … serotonin levels go down, similar to what happens in obsessive-compulsive disorder….
If these biochemical phenomena sound similar to those of the fight-or-flight syndrome, they are, except that we are running toward something or someone; indeed, a cynic might say toward rather than away from danger. The changes are also fully consistent with those of the early phase of addictive behavior. … Studies of ecstatic religious experiences also show the same physical phenomena; it can be said that the person having such an experience is falling in love with God or whatever spiritual connection on which they are focused. …
Religious readers care to weigh in?
Consider two related species of voles: the prairie vole and the montane vole. They are pretty much identical, except that the prairie vole has receptors for oxytocin and vasopressin, whereas the montane vole does not. The prairie vole is noted for lifetime monogamous relationships, while the montane vole resorts almost exclusively to one-night stands.
Learning by species:
A mother rat will build a nest for her young even if she has never seen another rat in her lifetime. Similarly, a spider will spin a web, a caterpillar will create her own cocoon, and a beaver will build a dam, even if no contemporary ever showed them how to accomplish these complex tasks. That is not to say that these are not learned behaviors. It is just that the animals did not learn them in a single lifetime… The evolution of animal behavior does constitute a learning process, but it is learning by the species, not by the individual, and the fruits of this learning process are encoded in DNA.
I think that’s enough for today; what did you think? Did you enjoy the book? Is Kurzweil on the right track with his pattern recognizers? Are non-biological neocortexes on the horizon? Will we soon convert the solar system to computronium?
Let’s continue this discussion next Monday–so if you haven’t read the book yet, you still have a whole week to finish.
Welcome back to EvX’s book club. Today we’re reading Chapter 11 of The Code Economy, Education.
…since the 1970s, the economically fortunate among us have been those who made the “go to college” choice. This group has seen its income grow rapidly and its share of the aggregate wealth increase sharply. Those without a college education have watched their income stagnate and their share of the aggregate wealth decline. …
Middle-age white men without a college degree have been beset by sharply rising death rates–a phenomenon that contrasts starkly with middle-age Latino and African American men, and with trends in nearly every other country in the world.
It turns out that I have a lot of graphs on this subject. There’s a strong correlation between “white death” and “Trump support.”
[Graph: White vs. non-white Americans]
[Graph: American whites vs. other first-world nations]
But “white men” doesn’t tell the complete story, as death rates for women have been increasing at about the same rate. The Great White Death seems to be as much a female phenomenon as a male one–men just started out with higher death rates in the first place.
Many of these are deaths of despair–suicide, directly or through simply giving up on living. Many involve drugs or alcohol. And many are due to diseases, like cancer and diabetes, that used to hit later in life.
We might at first think the change is just an artifact of more people going to college–perhaps there was always a sub-set of people who died young, but in the days before most people went to college, nothing distinguished them particularly from their peers. Today, with more people going to college, perhaps the destined-to-die are disproportionately concentrated among folks who didn’t make it to college. However, if this were true, we’d expect death rates to hold steady for whites overall–and they have not.
Whatever is affecting lower-class whites, it’s real.
Auerswald then discusses the “permanent income hypothesis,” developed by Milton Friedman: children and young adults devote their time to education (even going into debt), which allows them to get better jobs in mid-life. When they get jobs, they stop going to school and start saving for retirement. Then they retire.
The permanent income hypothesis is built into the very structure of our society, from Public Schools that serve students between the ages of 5 and 18, to Pell Grants for college students, to Social Security benefits that kick in at 65. The assumption, more or less, is that a one-time investment in education early in life will pay off for the rest of one’s life–a payout of such returns to scale that it is even sensible for students and parents to take out tremendous debt to pay for that education.
But this is dependent on that education actually paying off–and that is dependent on the skills people learn during their educations being in demand and sufficient for their jobs for the next 40 years.
The system falls apart if technology advances and thus job requirements change faster than once every 40 years. We are now looking at a world where people’s investments in education can be obsolete by the time they graduate, much less by the time they retire.
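To make the timing problem concrete, here is a toy back-of-the-envelope model (all of the numbers are invented for illustration): if the wage premium from a one-time education decays as the underlying technology turns over, the lifetime payoff shrinks quickly as the turnover rate rises.

```python
def lifetime_return(upfront_cost, annual_premium, working_years, skill_half_life):
    """Net lifetime value of a one-time education whose wage premium decays
    with a given half-life as the underlying technology changes."""
    total = 0.0
    for year in range(working_years):
        total += annual_premium * 0.5 ** (year / skill_half_life)
    return total - upfront_cost

# Invented figures: $100k of schooling, $20k/yr initial premium, 40-year career.
for half_life in (40, 20, 10, 5):
    print(half_life, round(lifetime_return(100_000, 20_000, 40, half_life)))
```

Under these made-up numbers, the net payoff falls from roughly $480k when skills stay relevant for a whole career to roughly $50k when they go stale every five years–which is the intuition behind shifting to continual retraining.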
Right now, people are trying to make up for the decreasing returns to education (a high school diploma does not get you the same job today as it did in 1950) by investing more money and time into the single-use system–encouraging toddlers to go to school on the one end and poor students to take out more debt for college on the other.
This is probably a mistake, given the time-dependent nature of the problem.
The obvious solution is to change how we think of education and work. Instead of a single, one-time investment, education will have to continue after people begin working, probably in bursts. Companies will continually need to re-train workers in new technology and innovations. Education cannot be just a single investment, but a life-long process.
But that is hard to do if people are already in debt from all of the college they just paid for.
Auerswald then discusses some fascinating work by Bessen on how the industrial revolution affected incomes and production among textile workers:
… while a handloom weaver in 1800 required nearly forty minutes to weave a yard of coarse cloth using a single loom, a weaver in 1902 could do the same work operating eighteen Northrop looms in less than a minute, on average. This striking point relates to the relative importance of the accumulation of capital to the advance of code: “Of the roughly thirty-nine-minute reduction in labor time per yard, capital accumulation due to the changing cost of capital relative to wages accounted for just 2 percent of the reduction; invention accounted for 73 percent of the reduction; and 25 percent of the time saving came from greater skill and effort of the weavers.” … “the role of capital accumulation was minimal, counter to the conventional wisdom.”
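Translating those quoted percentages back into minutes (a quick arithmetic check against the roughly thirty-nine-minute reduction):

```python
reduction_minutes = 39.0  # roughly forty minutes per yard down to under a minute
shares = {
    "capital accumulation": 0.02,
    "invention": 0.73,
    "weaver skill and effort": 0.25,
}
for source, share in shares.items():
    print(f"{source}: ~{share * reduction_minutes:.1f} minutes saved per yard")
# capital accumulation: ~0.8 minutes saved per yard
# invention: ~28.5 minutes saved per yard
# weaver skill and effort: ~9.8 minutes saved per yard
```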
Then Auerswald proclaims:
What was the role of formal education in this process? Essentially nonexistent.
New technologies are simply too new for anyone to learn about them in school. Flexible thinkers who learn fast (generalists) thus benefit from new technologies and are crucial for their early development. Once a technology matures, however, it becomes codified into platforms and standards that can be taught, at which point demand for generalists declines and demand for workers with educational training in the specific field rises.
For Bessen, formal education and basic research are not the keys to the development of economies that they are often represented as being. What drives the development of economies is learning by doing and the advance of code–processes that are driven at least as much by non-expert tinkering as by formal research and instruction.
Make sure to read the endnotes to this chapter; several of them are very interesting. For example, #3 begins:
"Typically, new technologies demand that a large number of variables be properly controlled. Henry Bessemer's simple principle of refining molten iron with a blast of oxygen works properly only at the right temperatures, in the right size vessel, with the right sort of vessel refractory lining, the right volume and temperature of air, and the right ores…" Furthermore, the products of these factories were really ones that, in the United States, previously had been created at home, not by craftsmen…
"Early-stage technologies–those with relatively little standardized knowledge–tend to be used at a smaller scale; activity is localized; personal training and direct knowledge sharing are important, and labor markets do not compensate workers for their new skills. Mature technologies–with greater standardized knowledge–operate at large scale and globally, market permitting; formalized training and knowledge are more common; and robust labor markets encourage workers to develop their own skills." … The intensity of interactions that occur in cities is also important in this phase: "During the early stages, when formalized instruction is limited, person-to-person exchange is especially important for spreading knowledge."
This discussion of generalists and codified knowledge brings to mind the archetype of the ideal Head Girl:

The ideal Head Girl is an all-rounder: performs extremely well in all school subjects and has a very high Grade Point Average. She is excellent at sports, Captaining all the major teams. She is also pretty, popular, sociable and well-behaved.
The Head Girl will probably be a big success in life…
But the Head Girl is not, cannot be, a creative genius.
Modern society is run by Head Girls, of both sexes, hence there is no place for the creative genius.
Modern Colleges aim at recruiting Head Girls, so do universities, so does science, so do the arts, so does the mass media, so does the legal profession, so does medicine, so does the military…
And in doing so, they filter-out and exclude creative genius.
Creative geniuses invent new technologies; head girls oversee the implementation and running of code. Systems that run on code can run very smoothly and do many things well, but they are bad at handling creative geniuses, as many a genius will tell you about their public school experience.
How the different stages in the adoption of new technology, and its codification into platforms, translate into wages over time is a subject I'd like to see more about.
Auerswald then turns to the perennial problem of what happens when not only do the jobs change, they entirely disappear due to increasing robotification:
Indeed, many of the frontier business models shaping the economy today are based on enabling a sharp reduction in the number of people required to perform existing tasks.
One possibility Auerswald envisions is a kind of return to the personalized markets of yesteryear, before massive industrial giants like Walmart sprang up. Via internet-based platforms like Uber or AirBNB, individuals can connect directly with people who'd like to buy their goods or services.
Since services make up more than 84% of the US economy, and an increasingly similar share in other countries, this is a big deal.
It’s easy to imagine this future in which we are all like some sort of digital Amish, continually networked via our phones to engage in small transactions like sewing a pair of trousers for a neighbor, mowing a lawn, selling a few dozen tacos, or driving people to the airport during a few spare hours on a Friday afternoon. It’s also easy to imagine how Walmart might still have massive economies of scale over individuals and the whole system might fail miserably.
However, if we take the entrepreneurial perspective, such enterprises are intriguing. Uber and Airbnb work by essentially “unlocking” latent assets–time when people’s cars or homes were sitting around unused. Anyone who can find other, similar latent assets and figure out how to unlock them could become similarly successful.
I’ve got an underutilized asset: rural poor. People in cities are easy to hire and easy to direct toward educational opportunities. Kids growing up in rural areas are often out of the communications loop (the internet doesn’t work terribly well in many rural areas) and have to drive a long way to job interviews.
In general, it’s tough to network large rural areas in the same ways that cities get networked.
On the matter of why peer-to-peer networks have emerged in certain industries, Auerswald makes a claim that I feel compelled to contradict:
The peer-to-peer business models in local transportation, hospitality, food service, and the rental of consumer goods were the first to emerge, not because they are the most important for the economy but because these are industries with relatively low regulatory complexity.
No no no!
Food trucks emerged because heavy regulations on restaurants (eg, fire code, disability access, landscaping,) have cut significantly into profits for restaurants housed in actual buildings.
Uber emerged because the cost of a cab medallion–that is, a license to operate a cab–hit 1.3 MILLION DOLLARS in NYC. It's a lucrative industry that people were being kept out of.
In contrast, there has been little peer-to-peer business innovation in healthcare, energy, and education–three industries that comprise more than a quarter of the US GDP–where regulatory complexity is relatively high.
There is a ton of competition in healthcare; just look up naturopaths and chiropractors. Sure, most of them are quacks, but they’re definitely out there, competing with regular doctors for patients. (Midwives appear to be actually pretty effective at what they do and significantly cheaper than standard ob-gyns.)
The difficulty with peer-to-peer healthcare isn’t regulation but knowledge and equipment. Most Americans own a car and know how to drive, and therefore can join Uber. Most Americans do not know how to do heart surgery and do not have the proper equipment to do it with. With training I might be able to set a bone, but I don’t own an x-ray machine. And you definitely don’t want me manufacturing my own medications. I’m not even good at making soup.
Education has tons of peer-to-peer innovation. I homeschool my children. Sometimes grandma and grandpa teach the children. Many homeschoolers join consortia that offer group classes, often taught by a knowledgeable parent or hired tutor. Even people who aren’t homeschooling their kids often hire tutors, through organizations like Wyzant or afterschool test-prep centers like Kumon. One of my acquaintances makes her living primarily by skype-tutoring Koreans in English.
And that’s not even counting private schools.
Yes, if you want to set up a formal “school,” you will encounter a lot of regulation. But if you just want to teach stuff, there’s nothing stopping you except your ability to find students who’ll pay you to learn it.
Now, energy is interesting. Here Auerswald might be correct. I have trouble imagining people setting up their own hydroelectric dams without getting into trouble with the EPA (not to mention everyone downstream.)
But what if I set up my own windmill in my backyard? Can I connect it to the electric grid and sell energy to my neighbors on a windy day? A quick search brings up WindExchange, which says, very directly:
Owners of wind turbines interconnected directly to the transmission or distribution grid, or that produce more power than the host consumes, can sell wind power as well as other generation attributes.
So, maybe you can’t set up your own nuclear reactor, and maybe the EPA has a thing about not disturbing fish, but it looks like you can sell wind and solar energy back to the grid.
I find this a rather exciting thought.
Ultimately, while Auerswald does return to and address the need to radically change how we think about education and the education-job-retirement lifepath, he doesn’t return to the increasing white death rate. Why are white death rates increasing faster than other death rates, and will transition to the “gig economy” further accelerate this trend? Or was the past simply anomalous for having low white death rates, or could these death rates be driven by something independent of the economy itself?
Now, it’s getting late, so that’s enough for tonight, but what are your thoughts? How do you think this new economy–and educational landscape–will play out?
Welcome back to EvX’s Book Club. Today we start the third (and final) part of Auerswald’s The Code Economy: The Human Advantage.
Chapter 10: Complementarity discusses bifurcation, a concept Auerswald mentions frequently throughout the book. He has a graph of the process of bifurcation, whereby the development of new code (ie, technology) leads to the creation of a new "platform" on the one hand and new human work on the other. With each bifurcation, we move away from the corner of the graph marked "simplicity" and "autonomy," and toward the corner marked "complexity" and "interdependence." It looks remarkably like a graph I made about energy inputs vs outputs at different complexity levels, based on my memory of a graph I saw in a textbook some years ago.
There are some crucial differences between our two graphs, but I think they are nonetheless related–and possibly trying to express the same thing.
Auerswald argues that as code becomes platform, it doesn’t steal jobs, but becomes the new base upon which people work. The Industrial Revolution eliminated the majority of farm laborers via automation, but simultaneously provided new jobs for them, in factories. Today, the internet is the “platform” where jobs are being created, not in building the internet, but via businesses like Uber that couldn’t exist without the internet.
Auerswald’s graph (not mine) is one of the few places in the book where he comes close to examining the problem of intelligence. It is difficult to see what unintelligent people are going to do in a world that is rapidly becoming more complicated.
On the other hand, people who didn't have access to all sorts of resources now do, thanks to internet-based platforms–people in the third world, for example, who never bought land-line telephones because their country couldn't afford to build the infrastructure to support them, are snapping up mobile phones and smartphones at an extraordinary rate:
And overwhelming majorities in almost every nation surveyed report owning some form of mobile device, even if they are not considered “smartphones.”
And just like Auerswald’s learning curves from the last chapter, technological spread is speeding up. It took the landline telephone 64 years to go from 0% to 40% of the US market. Mobile phones took only 20 years to accomplish the same feat, and smartphones did it in about 10. (source.)
There are now more mobile phones in the developing world than in the first world, and people aren't just buying these phones to chat. People who can't afford to open bank accounts now use their smartphones as "mobile wallets":
According to the GSMA, an industry group for the mobile communications business, there are now 79 mobile money systems globally, mostly in Africa and Asia. Two-thirds of them have been launched since 2009.
To date, the most successful example is M-Pesa, which Vodafone launched in Kenya in 2007. A little over three years later, the service has 13.5 million users, who are expected to send 20 percent of the country’s GDP through the system this year. “We proved at Vodafone that if you get the proposition right, the scale-up is massive,” says Nick Hughes, M-Pesa’s inventor.
But let's get back to Auerswald. Chapter 10 contains a very interesting description of the development of the Swiss watch industry. Of course, today, most people don't go out of their way to buy watches, since their smartphones have clocks built into them. Have smartphones put the Swiss out of business? Not quite, says Auerswald:
Switzerland… today produces fewer than 5 percent of the timepieces manufactured for export globally. In 2014, Switzerland exported 29 million watches, as compared to China's 669 million… But what of value? … Swiss watch exports were worth 24.3 billion in 2014, nearly five times as much as all Chinese watches combined.
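A quick per-unit calculation makes the gap vivid. This is only a rough sketch: I'm assuming the 24.3 billion figure is in dollars and backing out the Chinese total from the "nearly five times as much" comparison in the quote.

```python
# Rough average export value per watch, using the figures quoted above.
# Assumptions: the Swiss export value is in dollars, and China's total is
# approximated as one fifth of Switzerland's ("nearly five times as much").

swiss_watches = 29_000_000
swiss_value = 24.3e9                 # assumed USD
chinese_watches = 669_000_000
chinese_value = swiss_value / 5      # rough inference from the quote

print(f"Average Swiss watch:   ${swiss_value / swiss_watches:,.0f}")      # ~$840
print(f"Average Chinese watch: ${chinese_value / chinese_watches:,.2f}")  # ~$7
```

Two orders of magnitude apart: a high-volume, low-price market on one side and a low-volume, high-price market on the other.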
Aside from the previously mentioned bifurcation of human and machine labor, Auerswald suggests that automation bifurcates products into cheap and expensive ones. He claims that movies, visual art services (ie, copying and digitization of art vs. fine art,) and music have also undergone bifurcation, not extinction, due to new technology.
In each instance, disruptive advances in code followed a consistent and predictable pattern: the creation of a new high-volume, low-price option creates a new market for the low-volume, high-price option. Every time this happens, the new value created through improved code forces a bifurcation of markets, and of work.
He then discusses a watch-making startup located in Detroit, which I feel completely and totally misses the point of whatever economic lessons we can draw from Detroit.
Detroit is, at least currently, a lesson in how people fail to deal with increasing complexity, much less bifurcation.
Even that word–bifurcation–contains a problem: what happens to the middle? A huge mass of people at the bottom, making and consuming cheap products, and a small class at the top, making and consuming expensive products–well, I will honor everyone's demonstrated preferences for stuff, of whatever price, but what about the middle?
Is this how the middle class dies?
But if the poor become rich enough… does it matter?
Because work is fundamentally algorithmic, it is capable of almost limitless diversification through both combinatorial and incremental change. The algorithms of work become, fairly literally, the DNA of the economy. …
As Geoff Moore puts it, “Digital innovation is reengineering our manufacturing-based product-centric economy to improve quality, reduce cost, expand markets, … It is doing so, however, largely at the expense of traditional middle class jobs. This class of work is bifurcating into elite professions that are highly compensated but outside the skillset of the target population and commoditizing workloads for which the wages fall well below the target level.”
It is easy to take the long view and say, “Hey, the agricultural revolution didn’t result in massive unemployment among hunter-gatherers; the bronze and iron ages didn’t result in unemployed flint-knappers starving in the streets, so we’ll probably survive the singularity, too,” and equally easy to take the short view and say, “screw the singularity, I need a job that pays the bills now.”
Auerswald then discusses the possibilities for using big data and mobile/wearable computers to bring down healthcare costs. I am also in the middle of a Big Data reading binge, and my general impression of health care is that there is a ton of data out there (and more being collected every day,) but it is unwieldy and disorganized; doctors are too busy to use most of it, and patients don't have access to it. If someone could amass, organize, and sort that data in useful ways, some very useful discoveries could be made.
Then we get to the graph that I didn't understand, "Trends in Nonroutine Task Input, 1960 to 1998," which is a bad sign for my future employment options in this new economy.
My main question is what is meant by “nonroutine manual” tasks, and since these were the occupations with the biggest effect shown on the graph, why aren’t they mentioned in the abstract?:
We contend that computer capital (1) substitutes for a limited and well-defined set of human activities, those involving routine (repetitive) cognitive and manual tasks; and (2) complements activities involving non-routine problem solving and interactive tasks. …Computerization is associated with declining relative industry demand for routine manual and cognitive tasks and increased relative demand for non-routine cognitive tasks.
Yes, but what about the non-routine manual? What is that, and why did it disappear first? And does this graph account for increased offshoring of manufacturing jobs to China?
If you ask me, it looks like there are three different events recorded in the graph, not just one. First, from 1960 onward, “non-routine manual” jobs plummet. Second, from 1960 through 1970, “routine cognitive” and “routine manual” jobs increase faster than “non-routine analytic” and “non-routine interactive.” Third, from 1980 onward, the routine jobs head downward while the analytic and interactive jobs become more common.
*Downloads the PDF and begins to read* Here’s the explanation of non-routine manual:
Both optical recognition of objects in a visual field and bipedal locomotion across an uneven surface appear to require enormously sophisticated algorithms, the one in optics and the other in mechanics, which are currently poorly understood by cognitive science (Pinker, 1997). These same problems explain the earlier mentioned inability of computers to perform the tasks of long haul truckers.
In this paper we refer to such tasks requiring visual and manual skills as ‘non-routine manual activities.’
This does not resolve the question.
Discussion from the paper:
Trends in routine task input, both cognitive and manual, also follow a striking pattern. During the 1960s, both forms of input increased due to a combination of between- and within-industry shifts. In the 1970s, however, within-industry input of both tasks declined, with the rate of decline accelerating.
As distinct from the other four task measures, we observe steady within- and between-industry shifts against non-routine manual tasks for the entire four decades of our sample. Since our conceptual framework indicates that non-routine manual tasks are largely orthogonal to computerization, we view this pattern as neither supportive nor at odds with our model.
Now, it’s 4 am and the world is swimming a bit, but I think “we aren’t predicting any particular effect on non-routine manual tasks” should have been stated up front in the thesis portion. Sticking it in here feels like ad-hoc explaining away of a discrepancy. “Well, all of the other non-routine tasks went up, but this one didn’t, so, well, it doesn’t count because they’re hard to computerize.”
Anyway, the paper is 62 pages long, including the tables and charts, and I'm not reading it all or second-guessing their math at this hour, but I feel like there is something circular in all of this–"We already know that jobs involving routine labor like manufacturing are down, so we made a model saying they decreased as a percent of jobs because of computers and automation, looked through the jobs data, and lo and behold, found that they had decreased. Confusingly, though, we also found that non-routine manual jobs decreased during this time period, even though they don't lend themselves to automation and computerization."
I also searched in the document and could find no instance of the words “offshor-” “China” “export” or “outsource.”
Also, the graph Auerswald uses and the corresponding graph in the paper have some significant differences, especially the “routine cognitive” line. Maybe the authors updated their graph with more data, or Auerswald was trying to make the graph clearer. I don’t know.
Whatever is up with this paper, I think we may provisionally accept its data–fewer factory workers, more lawyers–without necessarily accepting its model.
The day after I wrote this, I happened to be reading Davidowitz's Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are, which has a discussion of the best places to raise children.
Talking about Chetty’s data, Davidowitz writes:
The question asked: what is the chance that a person with parents in the bottom 20 percent of the income distribution reaches the top 20 percent of the income distribution? …
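For anyone wondering what computing that statistic actually involves, here is a minimal sketch of a transition probability of this kind, using invented parent/child income data rather than Chetty's actual records or methodology:

```python
# Minimal sketch of an intergenerational mobility statistic:
# P(child ends up in the top 20% | parent was in the bottom 20%).
# The data below are invented; Chetty's team uses millions of linked
# tax records and far more careful income-rank definitions.

import random

random.seed(0)

# Fake (parent_income, child_income) pairs with a weak positive link.
pairs = []
for _ in range(100_000):
    parent = random.random()
    child = 0.3 * parent + 0.7 * random.random()
    pairs.append((parent, child))

def cutoff(values, q):
    """Value below which a fraction q of the sample falls."""
    return sorted(values)[int(q * len(values))]

parent_bottom20 = cutoff([p for p, _ in pairs], 0.20)
child_top20 = cutoff([c for _, c in pairs], 0.80)

poor_parents = [(p, c) for p, c in pairs if p <= parent_bottom20]
made_it = sum(1 for _, c in poor_parents if c >= child_top20)

print(f"P(top 20% child | bottom 20% parent) = {made_it / len(poor_parents):.3f}")
```

If parents' and children's incomes were unrelated, this probability would be 0.20; the stronger the parent-child link, the further it falls below that.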
So what is it about the parts of the United States where there is high income mobility? What makes some places better at leveling the playing field, at allowing a poor kid to have a pretty good life? Areas that spend more on education provide a better chance to poor kids. Places with more religious people and lower crime do better. Places with more black people do worse. Interestingly, this has an effect not just on the black kids but on the white kids living there as well.
Here is Chetty’s map of upward mobility (or the lack thereof) by county. Given how closely it matches a map of “African Americans” + “Native Americans” I have my reservations about the value of Chetty’s research on the bottom end (is anyone really shocked to discover that black kids enjoy little upward mobility?) but it still has some comparative value.
Davidowitz then discusses Chetty’s analysis of where people live the longest:
Interestingly, for the wealthiest Americans, life expectancy is hardly affected by where you live. …
For the poorest Americans, life expectancy varies tremendously…. living in the right place can add five years to a poor person’s life expectancy. …
…religion, environment, and health insurance–do not correlate with longer life spans for the poor. The variable that does matter, according to Chetty and the others who worked on this study? How many rich people live in a city. More rich people in a city means the poor there live longer. Poor people in New York City, for example, live longer than poor people in Detroit.
Davidowitz suggests that maybe this happens because the poor learn better habits from the rich. I suspect the answer is simpler–here are a few possibilities:
1. The rich are effectively stopping the poor from doing self-destructive things, whether positively, eg, funding cultural institutions that poor people go to rather than turn to drugs or crime out of boredom, or negatively, eg, funding police forces that discourage life-shortening crime.
2. The rich fund/support projects that improve general health, like cleaner water systems or better hospitals.
3. The effect is basically just a measurement error that doesn’t account for rich people driving up land prices. The “poor” of New York would be wealthier if they had Detroit rents.
(In general, I think Davidowitz is stronger when looking for correlations in the data than when suggesting explanations for it.)
Now contrast this with Davidowitz’s own study on where top achievers grow up:
I was curious where the most successful Americans come from, so one day I decided to download Wikipedia. …
[After some narrowing for practical reasons] Roughly 2,058 American-born baby boomers were deemed notable enough to warrant a Wikipedia entry. About 30 percent made it through achievements in art or entertainment, 29 percent through sports, 9 percent via politics, and 3 percent in academia or science.
And this is why we are doomed.
The first striking fact I noticed in the data was the enormous geographic variation in the likelihood of becoming a big success …
Roughly one in 1,209 baby boomers born in California reached Wikipedia. Only one in 4,496 baby boomers born in West Virginia did. … Roughly one in 748 baby boomers born in Suffolk County, MA, where Boston is located, made it to Wikipedia. In some counties, the success rate was twenty times lower. …
I closely examined the top counties. It turns out that nearly all of them fit into one of two categories.
First, and this surprised me, many of these counties contained a sizable college town. …
I don’t know why that would surprise anyone. But this was interesting:
Of fewer than 13,000 boomers born in Macon County, Alabama, fifteen made it to Wikipedia–or one in 852. Every single one of them is black. Fourteen of them were from the town of Tuskegee, home of Tuskegee University, a historically black college founded by Booker T. Washington. The list included judges, writers, and scientists. In fact, a black child born in Tuskegee had the same probability of becoming a notable in a field outside of sports as a white child born in some of the highest-scoring, majority-white college towns.
The other factor that correlates with the production of notables?
A big city.
Being born in San Francisco County, Los Angeles County, or New York City all offered among the highest probabilities of making it to Wikipedia. …
Suburban counties, unless they contained major college towns, performed far worse than their urban counterparts.
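To get a rough sense of how large these geographic gaps are, here is a quick side-by-side of the rates quoted above (the figures come straight from Davidowitz's excerpts; nothing here is new data):

```python
# "Born here, ended up on Wikipedia" odds quoted in the excerpts above,
# expressed per 100,000 births and relative to West Virginia.

rates = {
    "California (statewide)": 1209,
    "West Virginia (statewide)": 4496,
    "Suffolk County, MA (Boston)": 748,
    "Macon County, AL (Tuskegee)": 852,
}

baseline = rates["West Virginia (statewide)"]
for place, one_in_n in rates.items():
    per_100k = 100_000 / one_in_n
    print(f"{place}: 1 in {one_in_n:,} (~{per_100k:.0f} per 100,000), "
          f"{baseline / one_in_n:.1f}x the West Virginia rate")
```

Boston and Tuskegee both produce notables at five to six times West Virginia's rate, which is exactly the college-town and big-city pattern Davidowitz describes.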
A third factor that correlates with success is the proportion of immigrants in a county, though I am skeptical of this finding because I’ve never gotten the impression that the southern border of Texas produces a lot of famous people.
Migrant farm laborers aside, though, America’s immigrant population tends to be pretty well selected overall and thus produces lots of high-achievers. (Steve Jobs, for example, was the son of a Syrian immigrant; Thomas Edison was the son of a Canadian refugee.)
The variable that didn’t predict notability:
One I found more than a little surprising was how much money a state spends on education. In states with similar percentages of its residents living in urban areas, education spending did not correlate with rates of producing notable writers, artists, or business leaders.
Of course, this is probably because 1. districts increase spending when students do poorly in school, and 2. rich people in urban areas send their kids to private schools.
It is interesting to compare my Wikipedia study to one of Chetty’s team’s studies discussed earlier. Recall that Chetty’s team was trying to figure out what areas are good at allowing people to reach the upper middle class. My study was trying to figure out what areas are good at allowing people to reach fame. The results are strikingly different.
Spending a lot on education helps kids reach the upper middle class. It does little to help them become a notable writer, artist, or business leader. Many of these huge successes hated school. Some dropped out.
Some, like Mark Zuckerberg, went to private school.
New York City, Chetty's team found, is not a particularly good place to raise a child if you want to ensure he reaches the upper middle class. It is a great place, my study found, if you want to give him a chance at fame.
A couple of methodological notes:
Note that Chetty’s data not only looked at where people were born, but also at mobility–poor people who moved from the Deep South to the Midwest were also more likely to become upper middle class, and poor people who moved from the Midwest to NYC were also more likely to stay poor.
Davidowitz's data only looks at where people were born; he does not answer whether moving to NYC makes you more likely to become famous. He also doesn't discuss who is becoming notable–are cities engines that make the children of already successful people even more successful, or are they places where even the poor have a shot at becoming famous?
I reject Davidowitz’s conclusions (which impute causation where there is only correlation) and substitute my own:
Cities are acceleration platforms for code. Code creates bifurcation. Bifurcation creates winners and losers while obliterating the middle.
This is not necessarily a problem if your alternatives are worse–if your choice is between poverty in NYC or poverty in Detroit, you may be better off in NYC. If your choice is between poverty in Mexico and poverty in California, you may choose California.
But if your choice is between a good chance of being middle class in Salt Lake City and a high chance of being poor (plus an extremely small chance of being rich) in NYC, you are probably a lot better off packing your bags and heading to Utah.
But if cities are important drivers of innovation (especially in science, to which we owe thanks for things like electricity and refrigerated food shipping,) then Auerswald has already provided us with a potential solution to their runaway effects on the poor: Henry George’s land value tax. As George accounts, one day, while overlooking San Francisco:
I asked a passing teamster, for want of something better to say, what land was worth there. He pointed to some cows grazing so far off that they looked like mice, and said, “I don’t know exactly, but there is a man over there who will sell some land for a thousand dollars an acre.” Like a flash it came over me that there was the reason of advancing poverty with advancing wealth. With the growth of population, land grows in value, and the men who work it must pay more for the privilege.
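For readers who haven't encountered George's idea before, here is a minimal sketch of how a land value tax differs from an ordinary property tax. The parcels, values, and rates below are made up purely for illustration; this is not a claim about actual tax law anywhere.

```python
# Toy comparison: ordinary property tax vs. a Georgist land value tax.
# All values and rates are invented for illustration.

parcels = [
    # (description, land_value, improvement_value)
    ("vacant downtown lot",      1_000_000,         0),
    ("downtown lot with tower",  1_000_000, 9_000_000),
    ("rural acreage with house",    50_000,   250_000),
]

PROPERTY_TAX_RATE = 0.01  # 1% of land + improvements (hypothetical)
LAND_TAX_RATE = 0.05      # 5% of land value only (hypothetical)

for name, land, improvements in parcels:
    property_tax = PROPERTY_TAX_RATE * (land + improvements)
    land_value_tax = LAND_TAX_RATE * land
    print(f"{name}: property tax ${property_tax:,.0f}, "
          f"land value tax ${land_value_tax:,.0f}")
```

Under the land value tax, putting up the tower costs the owner nothing extra, while sitting on a valuable vacant lot becomes expensive–which is the incentive George is pointing at, and the lever Auerswald suggests for cities' runaway land values.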
Alternatively, higher taxes on fortunes like Zuckerberg’s and Bezos’s might accomplish the same thing.
At least, this data on intelligence and fertility looks like a problem to me, especially when I'm trying to make conversation at the local moms group.
There are many potential reasons the data looks like this (including inaccuracy, though my lived experience says it is accurate.) Our culture encourages people to limit their fertility, and smart women are especially so encouraged. Smart people are also better at long-term planning and doing things like “reading the instructions on the birth control.”
But it seems likely that there is another factor, an arrow of causation pointing in the other direction: smart people tend to stay in school for longer, and people dislike having children while they are still in school. While you are in school, you are in some sense still a child, and we have a notion that children shouldn’t beget children.
People who drop out of school and start having children at 16 tend not to be very smart and also tend to have plenty of children during their child-creating years. People who pursue post-docs into their thirties tend to be very smart–and many of them are virgins.
Now, I don’t know about you, but I kind of like having smart people around, especially the kinds of people who invent refrigerators and make supply chains work so I can enjoy eating food, even though I live in a city, far from any farms. I don’t want to live in a world where IQ is crashing and we can no longer maintain complex technological systems.
We need to completely re-think this system where the smarter you are, the longer you are expected to stay in school, accruing debt and not having children.
Proposal one: Accelerated college for bright students. Let any student who can do college-level work begin college level work for college credits, even if they are still in high (or middle) school. There are plenty of bright students out there who could be completing their degrees by 18.
The entire framework of schooling probably ought to be sped up in a variety of ways, especially for bright students. The current framework often reflects the order in which various discoveries were made, rather than the age at which students are capable of learning the material. For example, negative numbers are apparently not introduced in the math curriculum until 6th grade, even though, in my experience, even kindergarteners are perfectly capable of understanding the concept of "debt." If I promise to give you one apple tomorrow, then I have "negative one apple." There is no need to hide the concept of negatives for 6 years.
Proposal two: More apprenticeship.
In addition to being costly and time-consuming, a college degree doesn't even guarantee that your chosen field will still be hiring when you graduate. (I know people with STEM degrees who graduated right as the dot-com bubble burst. Ouch.) We essentially want our educational system to turn out people who are highly skilled at highly specialized trades, yet capable of turning around and becoming highly skilled at another highly specialized trade on a dime if that doesn't work out. This leads to chemists returning to university for law degrees and physicists going back for medical degrees. We want students to have both "broad educations," so they can get hired anywhere, and "deep educations," so they'll actually be good at their jobs.
Imagine, instead, a system where high school students are allowed to take a two-year course in preparation for a particular field, at the end of which high performers are accepted into an apprenticeship program where they continue learning on the job. At worst, these students would have a degree, income, and job experience by the age of 20, even if they decided they now wanted to switch professions or pursue an independent education.
Proposal three: Make childbearing normal for adult students.
There’s no reason college students can’t get married and have children (aside from, obviously, their lack of jobs and income.) College is not more time consuming or physically taxing than regular jobs, and college campuses tend to be pretty pleasant places. Studying while pregnant isn’t any more difficult than working while pregnant.
Grad students, in particular, are old and mature enough to get married and start families, and society should encourage them to do so.
Proposal four: Stop denigrating child-rearing, especially for intelligent women.
Children are a lot of work, but they’re also fun. I love being with my kids. They are my family and an endless source of happiness.
What people want and value, they will generally strive to obtain.