Book Club: How to Create a Mind, pt 2/2

Ray Kurzweil, writer, inventor, thinker

Welcome back to EvX’s Book Club. Today we are finishing Ray Kurzweil’s How to Create a Mind: The Secret of Human Thought Revealed.

Spiders are interesting, but Kurzweil’s focus is computers, like Watson, which trounced the competition on Jeopardy!

I’ll let Wikipedia summarize Watson:

Watson was created as a question answering (QA) computing system that IBM built to apply advanced natural language processing, information retrieval, knowledge representation, automated reasoning, and machine learning technologies to the field of open domain question answering.[2]

The sources of information for Watson include encyclopedias, dictionaries, thesauri, newswire articles, and literary works. Watson also used databases, taxonomies, and ontologies. …

Watson parses questions into different keywords and sentence fragments in order to find statistically related phrases.[22] Watson’s main innovation was not in the creation of a new algorithm for this operation but rather its ability to quickly execute hundreds of proven language analysis algorithms simultaneously.[22][24] The more algorithms that find the same answer independently the more likely Watson is to be correct.[22] Once Watson has a small number of potential solutions, it is able to check against its database to ascertain whether the solution makes sense or not.[22]
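The “many independent algorithms voting” idea is simple enough to sketch. Here is a toy Python illustration, my own and not IBM’s code, of how several hypothetical scorers might be combined into a confidence ranking; the candidate answers, scorer functions, and weights are all invented.

```python
from collections import defaultdict

def combine_candidate_scores(candidates, scorers):
    """Toy ensemble: each scorer independently rates each candidate answer;
    candidates backed by more (and more confident) scorers rank higher."""
    totals = defaultdict(float)
    for candidate in candidates:
        for scorer in scorers:
            totals[candidate] += scorer(candidate)  # each scorer returns a 0..1 confidence
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical scorers standing in for Watson's hundreds of language-analysis algorithms.
scorers = [
    lambda c: 0.9 if c == "Chicago" else 0.1,   # e.g. a keyword-match scorer
    lambda c: 0.8 if c == "Chicago" else 0.3,   # e.g. a passage-retrieval scorer
    lambda c: 0.7 if c == "Toronto" else 0.4,   # e.g. a taxonomy/type-checking scorer
]

print(combine_candidate_scores(["Chicago", "Toronto"], scorers))
```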

Kurzweil opines:

That is at least one reason why Watson represents such a significant milestone: Jeopardy! is precisely such a challenging language task. … What is perhaps not evident to many observers is that Watson not only had to master the language in the unexpected and convoluted queries, but for the most part its knowledge was not hand-coded. It obtained that knowledge by actually reading 200 million pages of natural-language documents, including all of Wikipedia… If Watson can understand and respond to questions based on 200 million pages–in three seconds!–there is nothing to stop similar systems from reading the other billions of documents on the Web. Indeed, that effort is now under way.

A point about the history of computing that may be petty of me to emphasize:

Babbage’s conception is quite miraculous when you consider the era in which he lived and worked. However, by the mid-twentieth century, his ideas had been lost in the mists of time (although they were subsequently rediscovered.) It was von Neumann who conceptualized and articulated the key principles of the computer as we know it today, and the world recognizes this by continuing to refer to the von Neumann machine as the principal model of computation. Keep in mind, though, that the von Neumann machine continually communicates data between its various units and within those units, so it could not be built without Shannon’s theorems and the methods he devised for transmitting and storing reliable digital information. …

You know what? No, it’s not petty.

Amazon lists 57 books about Ada Lovelace aimed at children, 14 about Alan Turing, and ZERO about John von Neumann.

(Some of these search results are inevitably irrelevant, but the counts are roughly correct.)

“EvX,” you may be saying, “Why are you counting children’s books?”

Because children are our future, and the books that get published for children show what society deems important for children to learn–and will have an effect on what adults eventually know.

I don’t want to demean Ada Lovelace’s role in the development of software, but surely von Neumann’s contributions to the field are worth a single book!

*Slides soapbox back under the table*

Anyway, back to Kurzweil, now discussing quantum mechanics:

There are two ways to view the questions we have been considering–the converse Western and Eastern perspectives on the nature of consciousness and of reality. In the Western perspective, we start with a physical world that evolves patterns of information. After a few billion years of evolution, the entities in that world have evolved sufficiently to become conscious beings. In the Eastern view, consciousness is the fundamental reality; the physical world only comes into existence through the thoughts of conscious beings. …

The East-West divide on the issue of consciousness has also found expression in opposing schools of thought in the field of subatomic physics. In quantum mechanics, particles exist in what are called probability fields. Any measurement carried out on them by a measuring device causes what is called a collapse of the wave function, meaning that the particle suddenly assumes a particular location. A popular view is that such a measurement constitutes observation by a conscious observer… Thus the particle assumes a particular location … only when it is observed. Basically particles figure that if no one is bothering to look at them, they don’t need to decide where they are. I call this the Buddhist school of quantum mechanics …

Niels Bohr

Or as Niels Bohr put it, “A physicist is just an atom’s way of looking at itself.” He also claimed that we could describe electrons as exercising free will in choosing their positions, a statement I do not think he meant literally; “We must be clear that when it comes to atoms, language can be used only as in poetry,” as he put it.

Kurzweil explains the Western interpretation of quantum mechanics:

There is another interpretation of quantum mechanics… In this analysis, the field representing a particle is not a probability field, but rather just a function that has different values in different locations. The field, therefore, is fundamentally what the particle is. … The so-called collapse of the wave function, this view holds, is not a collapse at all. … It is just that a measurement device is also made up of particles with fields, and the interaction of the particle field being measured and the particle fields of the measuring device results in a reading of the particle being in a particular location. The field, however, is still present. This is the Western interpretation of quantum mechanics, although it is interesting to note that the more popular view among physicists worldwide is what I have called the Eastern interpretation.

Soviet atomic bomb, 1951

For example, Bohr had the yin-yang symbol on his coat of arms, along with the motto contraria sunt complementa, or contraries are complementary. Oppenheimer was such a fan of the Bhagavad Gita that he read it in Sanskrit and quoted it upon successful completion of the Trinity Test: “If the radiance of a thousand suns were to burst at once into the sky, that would be like the splendor of the mighty one,” and “Now I am become death, the destroyer of worlds.” He credited the Gita as one of the most important books in his life.

Why the appeal of Eastern philosophy? Is it something about physicists and mathematicians? Leibniz, after all, was fond of the I Ching. As Wikipedia says:

Leibniz was perhaps the first major European intellectual to take a close interest in Chinese civilization, which he knew by corresponding with, and reading other works by, European Christian missionaries posted in China. Having read Confucius Sinarum Philosophus on the first year of its publication,[153] he concluded that Europeans could learn much from the Confucian ethical tradition. He mulled over the possibility that the Chinese characters were an unwitting form of his universal characteristic. He noted with fascination how the I Ching hexagrams correspond to the binary numbers from 000000 to 111111, and concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical mathematics he admired.[154] Leibniz communicated his ideas of the binary system representing Christianity to the Emperor of China hoping it would convert him.[84] Leibniz may be the only major Western philosopher who attempted to accommodate Confucian ideas to prevailing European beliefs.[155]

Leibniz’s attraction to Chinese philosophy originates from his perception that Chinese philosophy was similar to his own.[153] The historian E.R. Hughes suggests that Leibniz’s ideas of “simple substance” and “pre-established harmony” were directly influenced by Confucianism, pointing to the fact that they were conceived during the period that he was reading Confucius Sinarum Philosophus.[153]
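The hexagram correspondence Leibniz noticed is easy to make concrete: each of the 64 hexagrams is a stack of six lines, each either broken (yin) or unbroken (yang), which maps directly onto a six-bit binary number. A quick sketch, where the ASCII rendering is mine and I am ignoring the traditional bottom-up reading order:

```python
def hexagram(n):
    """Render n (0-63) as six binary digits and as six broken/unbroken
    lines (one line per bit, in the same order as the digits)."""
    bits = format(n, "06b")
    lines = ["=====" if b == "1" else "==  ==" for b in bits]
    return bits, lines

bits, lines = hexagram(42)
print(bits)                # 101010
print("\n".join(lines))    # alternating unbroken/broken lines
```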

Perhaps it is just that physicists and mathematicians are naturally curious people, and Eastern philosophy is novel to a Westerner, or perhaps by adopting Eastern ideas, they were able to purge their minds of earlier theories of how the universe works, creating a blank space in which to evaluate new data without being biased by old conceptions–or perhaps it is just something about the way their minds work.

As for quantum mechanics, I favor the de Broglie-Bohm interpretation, but obviously I am not a physicist and my opinion doesn’t count for much. What do you think?

But back to the book. If you are fond of philosophical ruminations on the nature of consciousness, like “What if someone who could only see in black and white read extensively about the color ‘red,’ could they ever achieve the qualia of actually seeing the color red?” or “What if a man were locked in a room with a perfect Chinese rulebook that told him which Chinese characters to write in response to any set of characters written on notes passed under the door? The responses are in perfect Chinese, but the man himself understands not a word of Chinese,” then you’ll enjoy the discussion. If you already covered all of this back in Philosophy 101, you might find it a bit redundant.

Kurzweil notes that conditions have improved massively over the past century for almost everyone on earth, but people are increasingly anxious:

A primary reason people believe life is getting worse is because our information about the problems of the world has steadily improved. If there is a battle today somewhere on the planet, we experience it almost as if we were there. During World War II, tens of thousands of people might perish in a battle, and if the public could see it at all it was in a grainy newsreel in a movie theater weeks later. During World War I a small elite could read about the progress of the conflict in the newspaper (without pictures.) During the nineteenth century there was almost no access to news in a timely fashion for anyone.

As for the future of man, machines, and code, Kurzweil is even more optimistic than Auerswald:

The last invention that biological evolution needed to make–the neocortex–is inevitably leading to the last invention that humanity needs to make–truly intelligent machines–and the design of one is inspiring the other. … by the end of this century we will be able to create computation at the limits of what is possible, based on the laws of physics… We call matter and energy organized in this way “computronium,” which is vastly more powerful pound per pound than the human brain. It will not just be raw computation but will be infused with intelligent algorithms constituting all of human-machine knowledge. Over time we will convert much of the mass and energy in our tiny corner of the galaxy that is suitable for this purpose to computronium. … we will need to spread out to the rest of the galaxy and universe. …

How long will it take for us to spread our intelligence in its nonbiological form throughout the universe? … waking up the universe, and then intelligently deciding its fate by infusing it with our human intelligence in its nonbiological form, is our destiny.

Whew! That is quite the ending–and with that, we are finished as well. I hope you enjoyed the book. What did you think of it? Will Humanity 2.0 be good? Bad? Totally different? Or does the Fermi Paradox imply that Kurzweil is wrong? Did you like this shorter Book Club format? And do you have any ideas for our next Book Club pick?

Book Club: How to Create a Mind by Ray Kurzweil pt 1/2

Welcome to our discussion of Ray Kurzweil’s How to Create a Mind: The Secret of Human Thought Revealed. This book was requested by one of my fine readers; I hope you have enjoyed it.

If you aren’t familiar with Ray Kurzweil (you must be new to the internet), he is a computer scientist, inventor, and futurist whose work focuses primarily on artificial intelligence and phrases like “technological singularity.”

Wikipedia really likes him.

The book is part neuroscience, part explanations of how various AI programs work. Kurzweil uses models of how the brain works to enhance his pattern-recognition programs, and evidence from what works in AI programs to build support for theories on how the brain works.

The book delves into questions like “What is consciousness?” and “Could we recognize a sentient machine if we met one?” along with a brief history of computing and AI research.

My core thesis, which I call the Law of Accelerating Returns (LOAR), is that fundamental measures of information technology follow predictable and exponential trajectories…

I found this an interesting sequel to Auerswald’s The Code Economy and counterpart to Gazzaniga’s Who’s In Charge? Free Will and the Science of the Brain, which I listened to in audiobook form and therefore cannot quote very easily. Nevertheless, it’s a good book and I recommend it if you want more on brains.

The quintessential example of the law of accelerating returns is the perfectly smooth, doubly exponential growth of the price/performance of computation, which has held steady for 110 years through two world wars, the Great Depression, the Cold War, the collapse of the Soviet Union, the reemergence of China, the recent financial crisis, … Some people refer to this phenomenon as “Moore’s law,” but… [this] is just one paradigm among many.
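For concreteness, Kurzweil’s “doubly exponential” claim is that not only does price/performance grow exponentially, the rate of that exponential growth is itself increasing. Here is a minimal sketch of the difference in shape, with made-up coefficients, just to show what he means:

```python
import math

def singly_exponential(t, rate=0.6):
    """Plain exponential growth: a fixed doubling cadence."""
    return math.exp(rate * t)

def doubly_exponential(t, base_rate=0.3, acceleration=0.05):
    """Growth whose exponent itself grows over time, so the doubling
    cadence keeps shortening (one simple way to model Kurzweil's claim)."""
    return math.exp(base_rate * t + acceleration * t**2)

for t in range(0, 60, 10):  # years since the start of the trend
    print(t, f"{singly_exponential(t):.3g}", f"{doubly_exponential(t):.3g}")
```

On a log plot, the first curve is a straight line; the second bends upward.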

From Ray Kurzweil,

Auerswald claims that the advance of “code” (that is, technologies like writing that allow us to encode information) has, for the past 40,000 years or so, supplemented and enhanced human abilities, making our lives better. Auerswald is not afraid of increasing mechanization and robotification of the economy putting people out of jobs because he believes that computers and humans are good at fundamentally different things. Computers, in fact, were invented to do things we are bad at, like decode encryption, not stuff we’re good at, like eating.

The advent of computers, in his view, lets us concentrate on the things we’re good at, while off-loading the stuff we’re bad at to the machines.

Kurzweil’s view is different. While he agrees that computers were originally invented to do things we’re bad at, he also thinks that the computers of the future will be very different from those of the past, because they will be designed to think like humans.

A computer that can think like a human can compete with a human–and since it isn’t limited in its processing power by pelvic widths, it may well out-compete us.

But Kurzweil does not seem worried:

Ultimately we will create an artificial neocortex that has the full range and flexibility of its human counterpart. …

When we augment our own neocortex with a synthetic version, we won’t have to worry about how much additional neocortex can physically fit into our bodies and brains, as most of it will be in the cloud, like most of the computing we use today. I estimated earlier that we have on the order of 300 million pattern recognizers in our biological neocortex. That’s as much as could be squeezed into our skulls even with the evolutionary innovation of a large forehead and with the neocortex taking about 80 percent of the available space. As soon as we start thinking in the cloud, there will be no natural limits–we will be able to use billions or trillions of pattern recognizers, basically whatever we need, and whatever the law of accelerating returns can provide at each point in time. …

Last but not least, we will be able to back up the digital portion of our intelligence. …

That is kind of what I already do with this blog. The downside is that sometimes you people see my incomplete or incorrect thoughts.

On the squishy side, Kurzweil writes of the biological brain:

The story of human intelligence starts with a universe that is capable of encoding information. This was the enabling factor that allowed evolution to take place. …

The story of evolution unfolds with increasing levels of abstraction. Atoms–especially carbon atoms, which can create rich information structures by linking in four different directions–formed increasingly complex molecules. …

A billion years later, a complex molecule called DNA evolved, which could precisely encode lengthy strings of information and generate organisms described by these “programs”. …

The mammalian brain has a distinct aptitude not found in any other class of animal. We are capable of hierarchical thinking, of understanding a structure composed of diverse elements arranged in a pattern, representing that arrangement with a symbol, and then using that symbol as an element in a yet more elaborate configuration. …

I really want to know if squids or octopuses can engage in symbolic thought.

Through an unending recursive process we are capable of building ideas that are ever more complex. … Only Homo sapiens have a knowledge base that itself evolves, grows exponentially, and is passed down from one generation to another.

Kurzweil proposes an experiment to demonstrate something of how our brains encode memories: say the alphabet backwards.

If you’re among the few people who’ve memorized it backwards, try singing “Twinkle Twinkle Little Star” backwards.

It’s much more difficult than doing it forwards.

This suggests that our memories are sequential and in order. They can be accessed in the order they are remembered. We are unable to reverse the sequence of a memory.

Funny how that works.
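A rough computing analogy, mine rather than Kurzweil’s: the memory behaves like a singly linked list, where each item points only to the next one, so walking forward is trivial but going backward requires replaying the whole chain from the start.

```python
# Toy analogy: a "memory" stored as forward-only links.
alphabet = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
next_item = {a: b for a, b in zip(alphabet, alphabet[1:])}  # each letter only 'knows' its successor

def recite_forward(start="A"):
    item = start
    while item is not None:
        yield item
        item = next_item.get(item)

def recite_backward(end="Z"):
    # No backward links: to step back even once, we must replay the chain from the start.
    sequence = list(recite_forward())
    return list(reversed(sequence[:sequence.index(end) + 1]))

print("".join(recite_forward()))   # easy: ABCDEFGHIJKLMNOPQRSTUVWXYZ
print("".join(recite_backward()))  # requires reconstructing the whole forward sequence first
```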

On the neocortex itself:

A critically important observation about the neocortex is the extraordinary uniformity of its fundamental structure. … In 1957 Mountcastle discovered the columnar organization of the neocortex. … [In 1978] he described the remarkably unvarying organization of the neocortex, hypothesizing that it was composed of a single mechanism that was repeated over and over again, and proposing the cortical column as the basic unit. The differences in the height of certain layers in different regions noted above are simply differences in the amount of interconnectivity that the regions are responsible for dealing with. …

Extensive experimentation has revealed that there are in fact repeating units within each column. It is my contention that the basic unit is a pattern recognizer and that this constitutes the fundamental component of the neocortex.
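Kurzweil’s proposed unit is a recognizer that fires when enough of its expected lower-level inputs show up, and whose output then feeds recognizers at the next level up, which is what makes the whole scheme hierarchical. Here is a heavily simplified toy version; it is my own sketch, not the book’s algorithm, and the thresholds and example patterns are arbitrary:

```python
class PatternRecognizer:
    """Toy version of a hierarchical pattern recognizer: fires when enough
    of its expected lower-level inputs are active."""
    def __init__(self, name, inputs, threshold=0.6):
        self.name = name
        self.inputs = inputs          # names of lower-level patterns it expects
        self.threshold = threshold    # fraction of inputs that must be active

    def fires(self, active):
        hits = sum(1 for i in self.inputs if i in active)
        return hits / len(self.inputs) >= self.threshold

# Letters feed word-level recognizers, which feed a phrase-level recognizer.
word_a = PatternRecognizer("APPLE", ["A", "P", "P", "L", "E"])
word_p = PatternRecognizer("PIE", ["P", "I", "E"])
phrase = PatternRecognizer("APPLE PIE", ["APPLE", "PIE"])

active = {"A", "P", "L", "E", "I"}                      # noisy lower-level input
words = {w.name for w in (word_a, word_p) if w.fires(active)}
print(words, phrase.fires(words))                        # higher level fires only if both words do
```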

As I read, Kurzweil’s hierarchical models reminded me of Chomsky’s theories of language–Ray and Noam are both associated with MIT and have probably conversed many times. Kurzweil does get around to discussing Chomsky’s theories and their relationship to his work:

Language is itself highly hierarchical and evolved to take advantage of the hierarchical nature of the neocortex, which in turn reflects the hierarchical nature of reality. The innate ability of humans to learn the hierarchical structures in language that Noam Chomsky wrote about reflects the structure of the neocortex. In a 2002 paper he co-authored, Chomsky cites the attribute of “recursion” as accounting for the unique language faculty of the human species. Recursion, according to Chomsky, is the ability to put together small parts into a larger chunk, and then use that chunk as a part in yet another structure, and to continue this process iteratively. In this way we are able to build the elaborate structure of sentences and paragraphs from a limited set of words. Although Chomsky was not explicitly referring here to brain structure, the capability he is describing is exactly what the neocortex does. …
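As an aside, recursion in this sense is easy to demonstrate: a small grammar whose rules can embed a structure inside another instance of the same structure generates an open-ended set of sentences from a limited word set. A toy sketch, with an example grammar of my own invention:

```python
import random

# A tiny recursive grammar: a sentence can contain a clause that contains another sentence.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "cat"], ["the", "dog"], ["the", "neocortex"]],
    "VP": [["sleeps"], ["knows", "that", "S"]],   # "S" here is the recursive step
}

def generate(symbol="S", depth=0, max_depth=4):
    if symbol not in GRAMMAR:
        return [symbol]                           # terminal word
    options = GRAMMAR[symbol]
    if depth >= max_depth:
        options = [o for o in options if "S" not in o] or options  # stop recursing eventually
    out = []
    for part in random.choice(options):
        out.extend(generate(part, depth + 1, max_depth))
    return out

print(" ".join(generate()))
# e.g. "the dog knows that the cat knows that the neocortex sleeps"
```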

This sounds good to me, but I am under the impression that Chomsky’s linguistic theories are now considered outdated. Perhaps that is only his theory of universal grammar, though. Any linguistics experts care to weigh in?

According to Wikipedia:

Within the field of linguistics, McGilvray credits Chomsky with inaugurating the “cognitive revolution”.[175] McGilvray also credits him with establishing the field as a formal, natural science,[176] moving it away from the procedural form of structural linguistics that was dominant during the mid-20th century.[177] As such, some have called him “the father of modern linguistics”.[178][179][180][181]

The basis to Chomsky’s linguistic theory is rooted in biolinguistics, holding that the principles underlying the structure of language are biologically determined in the human mind and hence genetically transmitted.[182] He therefore argues that all humans share the same underlying linguistic structure, irrespective of sociocultural differences.[183] In adopting this position, Chomsky rejects the radical behaviorist psychology of B. F. Skinner which views the mind as a tabula rasa (“blank slate”) and thus treats language as learned behavior.[184] Accordingly, he argues that language is a unique evolutionary development of the human species and is unlike modes of communication used by any other animal species.[185][186] Chomsky’s nativist, internalist view of language is consistent with the philosophical school of “rationalism”, and is contrasted with the anti-nativist, externalist view of language, which is consistent with the philosophical school of “empiricism”.[187][174]

Anyway, back to Kurzweil, who has an interesting bit about love:

Science has recently gotten into the act as well, and we are now able to identify the biochemical changes that occur when someone falls in love. Dopamine is released, producing feelings of happiness and delight. Norepinephrine levels soar, which leads to a racing heart and overall feelings of exhilaration. These chemicals, along with phenylethylamine, produce elevation, high energy levels, focused attention, loss of appetite, and a general craving for the object of one’s desire. … serotonin levels go down, similar to what happens in obsessive-compulsive disorder….

If these biochemical phenomena sound similar to those of the fight-or-flight syndrome, they are, except that we are running toward something or someone; indeed, a cynic might say toward rather than away from danger. The changes are also fully consistent with those of the early phase of addictive behavior. … Studies of ecstatic religious experiences also show the same physical phenomena; it can be said that the person having such an experience is falling in love with God or whatever spiritual connection on which they are focused. …

Religious readers care to weigh in?

Consider two related species of voles: the prairie vole and the montane vole. They are pretty much identical, except that the prairie vole has receptors for oxytocin and vasopressin, whereas the montane vole does not. The prairie vole is noted for lifetime monogamous relationships, while the montane vole resorts almost exclusively to one-night stands.

Learning by species:

A mother rat will build a nest for her young even if she has never seen another rat in her lifetime. Similarly, a spider will spin a web, a caterpillar will create her own cocoon, and a beaver will build a dam, even if no contemporary ever showed them how to accomplish these complex tasks. That is not to say that these are not learned behaviors. It is just that the animals did not learn them in a single lifetime… The evolution of animal behavior does constitute a learning process, but it is learning by the species, not by the individual, and the fruits of this learning process are encoded in DNA.

I think that’s enough for today; what did you think? Did you enjoy the book? Is Kurzweil on the right track with his pattern recognizers? Are non-biological neocortexes on the horizon? Will we soon convert the solar system to computronium?

Let’s continue this discussion next Monday–so if you haven’t read the book yet, you still have a whole week to finish.


Book Club: The Code Economy, Chapter 11: Education and Death

Welcome back to EvX’s book club. Today we’re reading Chapter 11 of The Code Economy, Education.

…since the 1970s, the economically fortunate among us have been those who made the “go to college” choice. This group has seen its income grow rapidly and its share of the aggregate wealth increase sharply. Those without a college education have watched their income stagnate and their share of the aggregate wealth decline. …

Middle-aged white men without a college degree have been beset by sharply rising death rates–a phenomenon that contrasts starkly with middle-aged Latino and African American men, and with trends in nearly every other country in the world.

It turns out that I have a lot of graphs on this subject. There’s a strong correlation between “white death” and “Trump support.”

White vs. non-white Americans

American whites vs. other first world nations

source

But “white men” doesn’t tell the complete story, as death rates for women have been increasing at about the same rate. The Great White Death seems to be as much a female phenomenon as a male one–men just started out with higher death rates in the first place.

Many of these are deaths of despair–suicide, directly or through simply giving up on living. Many involve drugs or alcohol. And many are due to diseases, like cancer and diabetes, that used to hit later in life.

We might at first think the change is just an artifact of more people going to college–perhaps there was always a sub-set of people who died young, but in the days before most people went to college, nothing distinguished them particularly from their peers. Today, with more people going to college, perhaps the destined-to-die are disproportionately concentrated among folks who didn’t make it to college. However, if this were true, we’d expect death rates to hold steady for whites overall–and they have not.

Whatever is affecting lower-class whites, it’s real.

Auerswald then discusses the “permanent income hypothesis,” developed by Milton Friedman: children and young adults devote their time to education (even going into debt), which allows them to get better jobs in mid-life. Once working, they stop going to school and start saving for retirement. Then they retire.

The permanent income hypothesis is built into the very structure of our society, from Public Schools that serve students between the ages of 5 and 18, to Pell Grants for college students, to Social Security benefits that kick in at 65. The assumption, more or less, is that a one-time investment in education early in life will pay off for the rest of one’s life–a payout of such returns to scale that it is even sensible for students and parents to take out tremendous debt to pay for that education.

But this is dependent on that education actually paying off–and that is dependent on the skills people learn during their educations being in demand and sufficient for their jobs for the next 40 years.
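The hypothesis boils down to a small calculation: people smooth consumption against their expected lifetime (“permanent”) income, so going into debt for school early on makes sense only if the later earnings premium covers it. A back-of-the-envelope sketch with invented numbers:

```python
# Back-of-the-envelope permanent-income arithmetic (all numbers invented for illustration).
years_school, years_work, years_retired = 4, 40, 20
tuition_per_year = 30_000
wage_with_degree, wage_without = 70_000, 45_000

lifetime_with = years_work * wage_with_degree - years_school * tuition_per_year
lifetime_without = (years_work + years_school) * wage_without

for label, total in [("degree", lifetime_with), ("no degree", lifetime_without)]:
    lifespan = years_school + years_work + years_retired
    print(f"{label}: lifetime earnings {total:,}; smoothed consumption ≈ {total // lifespan:,}/yr")
```

If the wage premium only holds for the first decade or two of the career instead of all forty years, the arithmetic flips, which is exactly the problem.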

The system falls apart if technology advances and thus job requirements change faster than once every 40 years. We are now looking at a world where people’s investments in education can be obsolete by the time they graduate, much less by the time they retire.

Right now, people are trying to make up for the decreasing returns to education (a high school diploma does not get you the same job today as it did in 1950) by investing more money and time into the single-use system–encouraging toddlers to go to school on the one end and poor students to take out more debt for college on the other.

This is probably a mistake, given the time-dependent nature of the problem.

The obvious solution is to change how we think of education and work. Instead of a single, one-time investment, education will have to continue after people begin working, probably in bursts. Companies will continually need to re-train workers in new technology and innovations. Education cannot be just a single investment, but a life-long process.

But that is hard to do if people are already in debt from all of the college they just paid for.

Auerswald then discusses some fascinating work by Bessen on how the industrial revolution affected incomes and production among textile workers:

… while a handloom weaver in 1800 required nearly forty minutes to weave a yard of coarse cloth using a single loom, a weaver in 1902 could do the same work operating eighteen Northrop looms in less than a minute, on average. This striking point relates to the relative importance of the accumulation of capital to the advance of code: “Of the roughly thirty-nine-minute reduction in labor time per yard, capital accumulation due to the changing cost of capital relative to wages accounted for just 2 percent of the reduction; invention accounted for 73 percent of the reduction; and 25 percent of the time saving came from greater skill and effort of the weavers.” … “the role of capital accumulation was minimal, counter to the conventional wisdom.”
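It is worth making the arithmetic in that decomposition explicit: of the roughly 39 minutes saved per yard, about 0.8 minutes are attributed to capital accumulation, about 28.5 minutes to invention, and about 9.8 minutes to weaver skill and effort. A quick check:

```python
# Quick check of Bessen's decomposition of the ~39-minute-per-yard time saving.
total_saving_minutes = 40 - 1  # from ~40 minutes per yard in 1800 to under a minute in 1902
shares = {"capital accumulation": 0.02, "invention": 0.73, "skill and effort": 0.25}
for source, share in shares.items():
    print(f"{source}: ~{share * total_saving_minutes:.1f} minutes of the reduction")
```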

Then Auerswald proclaims:

What was the role of formal education in this process? Essentially nonexistent.

Boom.

New technologies are simply too new for anyone to learn about them in school. Flexible thinkers who learn fast (generalists) thus benefit from new technologies and are crucial for their early development. Once a technology matures, however, it becomes codified into platforms and standards that can be taught, at which point demand for generalists declines and demand for workers with educational training in the specific field rises.

For Bessen, formal education and basic research are not the keys to the development of economies that they are often represented as being. What drives the development of economies is learning by doing and the advance of code–processes that are driven at least as much by non-expert tinkering as by formal research and instruction.

Make sure to read the endnotes to this chapter; several of them are very interesting. For example, #3 begins:

“Typically, new technologies demand that a large number of variables be properly controlled. Henry Bessemer’s simple principle of refining molten iron with a blast of oxygen worked properly only at the right temperatures, in the right size vessel, with the right sort of vessel refractory lining, the right volume and temperature of air, and the right ores…” Furthermore, the products of these factories were really ones that, in the United States, previously had been created at home, not by craftsmen…

#8 states:

“Early-stage technologies–those with relatively little standardized knowledge–tend to be used at a smaller scale; activity is localized; personal training and direct knowledge sharing are important, and labor markets do not compensate workers for their new skills. Mature technologies–with greater standardized knowledge–operate at large scale and globally, market permitting; formalized training and knowledge are more common; and robust labor markets encourage workers to develop their own skills.” … The intensity of interactions that occur in cities is also important in this phase: “During the early stages, when formalized instruction is limited, person-to-person exchange is especially important for spreading knowledge.”

This reminds me of a post on Bruce Charlton’s blog about “Head Girl Syndrome”:

The ideal Head Girl is an all-rounder: performs extremely well in all school subjects and has a very high Grade Point Average. She is excellent at sports, Captaining all the major teams. She is also pretty, popular, sociable and well-behaved.

The Head Girl will probably be a big success in life…

But the Head Girl is not, cannot be, a creative genius.

*

Modern society is run by Head Girls, of both sexes, hence there is no place for the creative genius.

Modern Colleges aim at recruiting Head Girls, so do universities, so does science, so do the arts, so does the mass media, so does the legal profession, so does medicine, so does the military…

And in doing so, they filter-out and exclude creative genius.

Creative geniuses invent new technologies; head girls oversee the implementation and running of code. Systems that run on code can run very smoothly and do many things well, but they are bad at handling creative geniuses, as many a genius will tell you about their public school experience.

How different stages in the adoption of new technology and its codification into platforms translate into wages over time is a subject I’d like to see more of.

Auerswald then turns to the perennial problem of what happens when the jobs don’t just change but disappear entirely due to increasing robotification:

Indeed, many of the frontier business models shaping the economy today are based on enabling a sharp reduction in the number of people required to perform existing tasks.

One possibility Auerswald envisions is a kind of return to the personalized markets of yesteryear, before massive industrial giants like Walmart sprang up. Via internet-based platforms like Uber or AirBNB, individuals can connect directly with people who’d like to buy their goods or services.

Since services make up more than 84% of the US economy and an increasingly comparable percentage in countries elsewhere, this is a big deal.

It’s easy to imagine this future in which we are all like some sort of digital Amish, continually networked via our phones to engage in small transactions like sewing a pair of trousers for a neighbor, mowing a lawn, selling a few dozen tacos, or driving people to the airport during a few spare hours on a Friday afternoon. It’s also easy to imagine how Walmart might still have massive economies of scale over individuals and the whole system might fail miserably.

However, if we take the entrepreneurial perspective, such enterprises are intriguing. Uber and Airbnb work by essentially “unlocking” latent assets–time when people’s cars or homes were sitting around unused. Anyone who can find other, similar latent assets and figure out how to unlock them could become similarly successful.

I’ve got an underutilized asset: the rural poor. People in cities are easy to hire and easy to direct toward educational opportunities. Kids growing up in rural areas are often out of the communications loop (the internet doesn’t work terribly well in many rural areas) and have to drive a long way to job interviews.

In general, it’s tough to network large rural areas in the same ways that cities get networked.

On the matter of why peer-to-peer networks have emerged in certain industries, Auerswald makes a claim that I feel compelled to contradict:

The peer-to-peer business models in local transportation, hospitality, food service, and the rental of consumer goods were the first to emerge, not because they are the most important for the economy but because these are industries with relatively low regulatory complexity.

No no no!

Food trucks emerged because heavy regulations on restaurants (eg, fire code, disability access, landscaping,) have cut significantly into profits for restaurants housed in actual buildings.

Uber emerged because the cost of a cab medallion–that is, a license to drive a cab–hit 1.3 MILLION DOLLARS in NYC. It’s a lucrative industry that people were being kept out of.

In contrast, there has been little peer-to-peer business innovation in healthcare, energy, and education–three industries that comprise more than a quarter of the US GDP–where regulatory complexity is relatively high.

Again, no.

There is a ton of competition in healthcare; just look up naturopaths and chiropractors. Sure, most of them are quacks, but they’re definitely out there, competing with regular doctors for patients. (Midwives appear to be actually pretty effective at what they do and significantly cheaper than standard ob-gyns.)

The difficulty with peer-to-peer healthcare isn’t regulation but knowledge and equipment. Most Americans own a car and know how to drive, and therefore can join Uber. Most Americans do not know how to do heart surgery and do not have the proper equipment to do it with. With training I might be able to set a bone, but I don’t own an x-ray machine. And you definitely don’t want me manufacturing my own medications. I’m not even good at making soup.

Education has tons of peer-to-peer innovation. I homeschool my children. Sometimes grandma and grandpa teach the children. Many homeschoolers join consortia that offer group classes, often taught by a knowledgeable parent or hired tutor. Even people who aren’t homeschooling their kids often hire tutors, through organizations like Wyzant or afterschool test-prep centers like Kumon. One of my acquaintances makes her living primarily by skype-tutoring Koreans in English.

And that’s not even counting private schools.

Yes, if you want to set up a formal “school,” you will encounter a lot of regulation. But if you just want to teach stuff, there’s nothing stopping you except your ability to find students who’ll pay you to learn it.

Now, energy is interesting. Here Auerswald might be correct. I have trouble imagining people setting up their own hydroelectric dams without getting into trouble with the EPA (not to mention everyone downstream.)

But what if I set up my own windmill in my backyard? Can I connect it to the electric grid and sell energy to my neighbors on a windy day? A quick search brings up WindExchange, which says, very directly:

Owners of wind turbines interconnected directly to the transmission or distribution grid, or that produce more power than the host consumes, can sell wind power as well as other generation attributes.

So, maybe you can’t set up your own nuclear reactor, and maybe the EPA has a thing about not disturbing fish, but it looks like you can sell wind and solar energy back to the grid.

I find this a rather exciting thought.

Ultimately, while Auerswald does return to and address the need to radically change how we think about education and the education-job-retirement lifepath, he doesn’t return to the increasing white death rate. Why are white death rates increasing faster than other death rates, and will the transition to the “gig economy” further accelerate this trend? Or was the past simply anomalous for having low white death rates, or could these death rates be driven by something independent of the economy itself?

Now, it’s getting late, so that’s enough for tonight, but what are your thoughts? How do you think this new economy–and educational landscape–will play out?

Book Club: The Code [Robot] Economy (pt. 2)

Welcome to EvX’s book club. Today we’re discussing Philip Auerswald’s The Code Economy, Introduction.

I’ve been discussing the robot economy for years (though not necessarily via the blog.) What happens when robots take over most of the productive jobs? Most humans were once involved in directly producing the necessities of human life–food, clothing, and shelter, but mostly food. Today, machines have eliminated most food and garment production jobs. One tractor easily plows many more acres in a day than a horse or mule team did in the 1800s, allowing one man to produce as much food as dozens (or hundreds) once did.

What happened to those ex-farmers? Most of us are employed in new professions that didn’t exist (eg, computer specialist) or barely existed (health care), but there are always those who can’t find employment–and unemployment isn’t evenly distributed.

Black unemployment rate

Since 1948, the overall unemployment rate has rarely exceeded 7.5%; the rate for whites has been slightly lower. By contrast, the black unemployment rate has rarely dipped below 10% (since 1972, the best data I have.) The black unemployment rate has only gone below 7.5% three times–for one month in 1999, one month in 2000, and since mid-2017. 6.6% in April 2018 is the all-time low for black unemployment. (The white record, 3.0%, was set in the ’60s.)

(As Auerswald points out, “unemployment” was a virtually unknown concept in the Medieval economy, where social station automatically dictated most people’s jobs for life.)

Now I know the books are cooked and “unemployment” figures are kept artificially low by shunting many of the unemployed into the ranks of the officially “disabled,” who aren’t counted in the statistics, but no matter how you count the numbers, blacks do not find jobs at the same rates as whites–a problem they didn’t face in the pre-industrial, agricultural economy (though that economy caused suffering in its own way.)

A quick glance at measures of black and white educational attainment explains most of the employment gap–blacks graduate from school at lower rates, are less likely to earn a college degree, and overall have worse SAT/ACT scores. In an increasingly “post-industrial,” knowledge-based economy where most unskilled labor can be performed by robots, what happens to unskilled humans?

What happens when all of the McDonald’s employees have been replaced by robots and computers? When even the advice given by lawyers and accountants can be more cheaply delivered by an app on your smartphone? What if society, eventually, doesn’t need humans to perform most jobs?

Will most people simply be unemployed, ruled over by the robot-owning elite and the lucky few who program the robots? Will new forms of work we haven’t even begun to dream of emerge? Will we adopt some form of universal basic income, or descend into neo-feudalism? Will we have a permanent underclass of people with no hope of success in the current economy, either despairing at their inability to live successful lives or living slothfully off the efforts of others?

Here lies the crux of Auerswald’s thesis. He provides four possible arguments for how the “advance of code” (ie, the accumulation of technological knowledge and innovation,) could turn out for humans.

The Rifkin View:

  1. The power of code is growing at an exponential rate.
  2. Code nearly perfectly substitutes for human capabilities.
  3. Therefore the (relative) power of human capabilities is shrinking at an exponential rate.

If so, we should be deeply worried.

The Kurzweil View:

  1. The power of code is growing at an exponential rate.
  2. Code nearly perfectly complements human capabilities.
  3. Therefore the (absolute) power of human capabilities is growing at an exponential rate.

If so, we may look forward to the cyborg singularity.

The Auerswald View:

  1. The power of code is growing at an exponential rate [at least we all agree on something.]
  2. Code only partially substitutes for human capabilities.
  3. Therefore the (relative) power of human capabilities is shrinking at an exponential rate in those categories of work that can be performed by computers, but not in others.

Auerswald notes:

In other words, where Kurzweil talks about an impending code-induced Singularity, the reality looks much more like one code-induced bifurcation–the division of labor between humans and machines–after another.

The answer to the question, “Is there anything that humans can do better than digital computers?” turns out to be fairly simple: humans are better at being human.

Further:

1. Creating and improving code is a key part of what we human beings do. It’s how we invent the future by building on the past.

2. The evolution of the economy is driven by the advance of code. Understanding this advance is therefore fundamental to economics, and to much of human history.

3. When we create and advance code we don’t just invent new toys, we produce new forms of meaning, new experiences, and new ways of making our way in the world.
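The three syllogisms can be put side by side in a toy model: let the power of code grow exponentially, and see what happens to humans if code substitutes for human work (Rifkin), complements it (Kurzweil), or substitutes only in some categories of tasks (Auerswald). The numbers below are purely illustrative, my own and not Auerswald’s:

```python
# Toy comparison of the three views (all numbers invented; purely illustrative).
def code_power(t):
    return 2 ** t          # "the power of code is growing at an exponential rate"

human_power = 1.0          # hold raw human capability roughly constant

for t in range(0, 9, 2):
    c = code_power(t)
    rifkin = human_power / (human_power + c)        # code substitutes: relative human share shrinks
    kurzweil = human_power * (1 + c)                # code complements: absolute human capability grows
    # Auerswald: substitution only in automatable task categories, not in others
    auto_share = human_power / (human_power + c)
    human_tasks = human_power
    print(t, f"Rifkin share={rifkin:.3f}", f"Kurzweil power={kurzweil:.0f}",
          f"Auerswald: automatable={auto_share:.3f}, human-centric={human_tasks:.1f}")
```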

What do you think?

Book Club: The Code Economy pt 1

I don’t think the publishers got their money’s worth on cover design

Welcome to EvX’s Book Club. Today we begin our exciting tour of Philip E. Auerswald’s The Code Economy: A Forty-Thousand-Year History, with the Introduction, Technology = Recipes, and Chapter One, Jobs: Divide and Coordinate, if we get that far.

I’m not sure exactly how to run a book club, so just grab some coffee and let’s dive right in.

First, let’s note that Auerswald doesn’t mean code in the narrow sense of “commands fed into a computer” but in a much broader sense of all encoded processes humans have come up with. His go-to example is the cooking recipe.

The Code Economy describes the evolution of human productive activity from simplicity to complexity over the span of more than 40,000 years. I call this evolutionary process the advance of code.

I find the cooking example a bit cutesy, but otherwise it gets the job done.

How… have we humans managed to get where we are today despite our abundant failings, including wars, famine, and a demonstrably meager capacity for society-wide planning and coordination? … by developing productive activities that evolve into regular routines and standardized platforms–which is to say that we have survived, and thrived, by creating and advancing code.

There’s so much in this book that almost every sentence bears discussion. First, as I’ve noted before, social organization appears to be a spontaneous emergent feature of every human group. Without even really meaning to, humans just naturally seem compelled to organize themselves. One day you’re hanging out with your friends, riding motorcycles, living like an outlaw, and the next thing you know you’re using the formal legal system to sue a toy store for infringement of your intellectual property.

Alexander Wienberger, Holodomor

At the same time, our ability to organize society at the national level is completely lacking. As one of my professors once put it, “God must hate communists, because every time a country goes communist, an “act of god” occurs and everyone dies.”

It’s a mystery why God hates communists so much, but hate ’em He does. Massive-scale social engineering is a total fail and we’ll still be suffering the results for a long time.

This creates a kind of conflict, because people can look at the small-scale organizing they do, and they look at large-scale disorganization, and struggle to understand why the small stuff can’t simply be scaled up.

And yet… society still kind of works. I can go to the grocery store and be reasonably certain that by some magical process, fresh produce has made its way from fields in California to the shelf in front of me. By some magical process, I can wave a piece of plastic around and use it to exchange enough other, unseen goods to pay for my groceries. I can climb into a car I didn’t build and cruise down a network of streets and intersections, reasonably confident that everyone else driving their own two-ton behemoth at 60 miles an hour a few feet away from me has internalized the same rules necessary for not crashing into me. Most of the time. And I can go to the gas station and pour a miracle liquid into my car and the whole system works, whether or not I have any clue how all of the parts manage to come together and do so.

The result is a miracle. Modern society is a miracle. If you don’t believe me, try using an outhouse for a few months. Try carrying all of your drinking water by hand from the local stream and chopping down all of the wood you need to boil it to make it potable. Try fighting off parasites, smallpox, or malaria without medicine or vaccinations. For all my complaints (and I know I complain a lot,) I love civilization. I love not worrying about cholera, crop failure, or dying from cavities. I love air conditioning, refrigerators, and flush toilets. I love books and the internet and domesticated strawberries. All of these are things I didn’t create and can’t take credit for, but get to enjoy nonetheless. I have been blessed.

But at the same time, “civilization” isn’t equally distributed. Millions (billions?) of the world’s peoples don’t have toilets, electricity, refrigerators, or even a decent road from their village to the next.

GDP per capita by country

Auerswald is a passionate champion of code. His answer to unemployment problems is probably “learn to code,” but in such a broad, metaphorical way that encompasses so many human activities that we can probably forgive him for it. One thing he doesn’t examine is why code takes off in some places but not others. Why is civilization more complex in Hong Kong than in Somalia? Why does France boast more Fields Medalists than the DRC?

In our next book (Niall Ferguson’s The Great Degeneration,) we’ll discuss whether specific structures like legal and tax codes can affect how well societies grow and thrive (spoiler alert: they do, just see communism,) and of course you are already familiar with the Jared Diamond environmentalist theory that folks in some parts of the world just had better natural resources to work than in other parts (also true, at least in some cases. I’m not expecting some great industry to get up and running on its own in the arctic.)

IQ by country

But laying these concerns aside, there are obviously other broad factors at work. A map of GDP per capita looks an awful lot like a map of average IQs, with obvious caveats about the accidentally oil-rich Saudis and economically depressed ex-communists.

Auerswald believes that the past 40,000 years of code have not been disasters for the human race, but rather a cascade of successes, as each new invention and expansion to our repertoire of “recipes” or “codes” has enabled a whole host of new developments. For example, the development of copper tools didn’t just put flint knappers out of business, it also opened up whole new industries, because you can make more varieties of tools out of copper than flint. Now we had copper miners, copper smelters (a new profession), and copper workers. Copper tools could be sharpened and, unlike stone, resharpened, making copper tools more durable. Artists made jewelry; spools of copper wires became trade goods, traveling long distances and stimulating the prehistoric “economy.” New code bequeaths complexity and even more code, not mass flint-knapper unemployment.

Likewise, the increase in reliable food supply created by farming didn’t create mass hunter-gatherer unemployment, but stimulated the growth of cities and differentiation of humans into even more professions, like weavers, cobblers, haberdashers, writers, wheelwrights, and mathematicians.

It’s a hopeful view, and I appreciate it in these anxious times.

But it’s very easy to say that the advent of copper or bronze or agriculture was a success because we are descended from the people who succeeded. We’re not descended from the hunter-gatherers who got displaced or wiped out by agriculturalists. In recent cases where hunter-gatherer or herding societies were brought into the agriculturalist fold, the process has been rather painful.

Elizabeth Marshall Thomas’s The Harmless People, about the Bushmen of the Kalahari, might overplay the romance and downplay the violence, but the epilogue’s description of how the arrival of “civilization” resulted in the deaths and degradation of the Bushmen brought tears to my eyes. First they died of dehydration because new fences erected to protect “private property” cut them off from the only water. No longer free to pursue the lives they had lived for centuries, they were moved onto what are essentially reservations and taught to farm and herd. Alcoholism and violence became rampant.

Among the book’s many characters was a man who had lost most of his leg to snakebite. He suffered terribly as his leg rotted away, cared for by his wife and family who brought him food. Eventually, with help, he healed and obtained a pair of crutches, learned to walk again, and resumed hunting: providing for his family.

And then in “civilization” he was murdered by one of his fellow Bushmen.

It’s a sad story and there are no easy answers. Bushman life is hard. Most people, when given the choice, seem to pick civilization. But usually we aren’t given a choice. The Bushmen weren’t. Neither were factory workers who saw their jobs automated and outsourced. Some Bushmen will adapt and thrive. Nelson Mandela was part Bushman, and he did quite well for himself. But many will suffer.

What to do about the suffering of those left behind–those who cannot cope with change, who do not have the mental or physical capacity to “learn to code” or otherwise adapt–remains an unanswered question. Humanity might move on without them, ignoring their suffering because we find them undeserving of compassion–or we might get bogged down trying to save them all. Perhaps we can find a third route: sympathy for the unfortunate without encouraging obsolete behavior?

In The Great Degeneration, Ferguson wonders why the systems (“code”) that support our society appear to be degenerating. I have a crude answer: people are getting stupider. It takes a certain amount of intelligence to run a piece of code. Even a simple task like transcribing numbers is better performed by a smarter person than a dumber person, who is more likely to accidentally write down the wrong number. Human systems are built and executed by humans, and if the humans in them are less intelligent than the ones who made them, then they will do a bad job of running the systems.

Unfortunately for those of us over in civilization, dysgenics is a real thing:

Source: Audacious Epigone

Whether you blame IQ itself or the number of years smart people spend in school, dumb people have more kids (especially the parents of the Baby Boomers.) Epigone here only looks at white data (I believe Jayman has the black data and it’s just as bad, if not worse.)

Of course we can debate about the Flynn effect and all that, but I suspect there are two competing things going on: First, a rising ’50s economic tide lifted all boats, making everyone healthier and thus smarter and better at taking IQ tests and making babies, and second, declining infant mortality since the late 1800s and possibly the Welfare state made it easier for the children of the poorest and least capable parents to survive.

The effects of these two trends probably cancel out at first, but after a while you run out of Flynn effect (maybe) and then the other starts to show up. Eventually you get Greece: once the shining light of Civilization, now defaulting on its loans.

Well, we have made it a page in!

Termite City

What do you think of the book? Have you finished it yet? What do you think of the way Auerswald conceptualizes “code” and its basis as the building block of pretty much all human activity? Do you think Auerswald is essentially correct to be hopeful about our increasingly code-driven future, or should we beware of the tradeoffs to individual autonomy and freedom inherent in becoming a glorified colony of ants?

Cathedral Round-Up: Give Zuck a Chance?

Yale’s commencement speech was delivered this year by Epstein, a major-league baseball guy with a story about teamwork and winning the World Series.

MIT’s commencement speech was delivered by Matt Damon, no wait that was last year, this year they’re going to have Tim Cook, CEO of Apple.

Wellesley, of course, had Hillary Clinton. (Warning: link goes to Cosmopolitan.)

And Harvard’s commencement speech was delivered by Mark Zuckerberg, who discussed his presidential bid:

You’re graduating at a time when this is especially important. When our parents graduated, purpose reliably came from your job, your church, your community. But today, technology and automation are eliminating many jobs. Membership in communities is declining. Many people feel disconnected and depressed, and are trying to fill a void.

As I’ve traveled around, I’ve sat with children in juvenile detention and opioid addicts, who told me their lives could have turned out differently if they just had something to do, an after school program or somewhere to go.

Just a second. Do you know what I did after school to keep myself busy and out of juvie?

Homework.

Today I want to talk about three ways to create a world where everyone has a sense of purpose: by taking on big meaningful projects together, by redefining equality so everyone has the freedom to pursue purpose, and by building community across the world.

First, let’s take on big meaningful projects.

Our generation will have to deal with tens of millions of jobs replaced by automation like self-driving cars and trucks. But we have the potential to do so much more together.

Every generation has its defining works. More than 300,000 people worked to put a man on the moon – including that janitor. Millions of volunteers immunized children around the world against polio. Millions more people built the Hoover Dam and other great projects.

These projects didn’t just provide purpose for the people doing those jobs, they gave our whole country a sense of pride that we could do great things. …

So what are we waiting for? It’s time for our generation-defining public works. How about stopping climate change before we destroy the planet and getting millions of people involved manufacturing and installing solar panels? How about curing all diseases and asking volunteers to track their health data and share their genomes? Today we spend 50x more treating people who are sick than we spend finding cures so people don’t get sick in the first place. That makes no sense. We can fix this. How about modernizing democracy so everyone can vote online, and personalizing education so everyone can learn?

Oh, Zuck. You poor, naive man.

I’m not going to run through the pros and cons of solar panels because I don’t know the subject well enough. Maybe that’s a good idea.

I’d love to cure all diseases. Sure, nature would invent new ones, but it’d still be great. But what’s actually driving medical costs? Diseases we have no cures for, like ALS, Alzheimer’s, or the common cold? Or preventable things like overeating=>obesity=>heart disease? Or is it just a nasty mishmash of regulation, insurance, and greedy pharmaceutical companies?

According to What is Driving US Healthcare Costs:

Half of all adults in the U.S. have at least one chronic disease, such as heart disease, cancer, and [type 2] diabetes. Twenty-five percent of adults in the U.S. have two or more chronic diseases. An aging population, lifestyle choices (like exercise and nutrition), and genetics contribute to the growing prevalence of chronic illnesses.

Chronic diseases contribute to rising healthcare costs because they are expensive to treat. Eighty-six percent of all healthcare spending is for patients with a chronic disease. Patients with three or more chronic diseases are likely to fall into the most expensive one percent of patients, accounting for 20 percent of healthcare expenditures. Many of these patients require high spending in every cost category – physician visits, hospital stays, prescription drugs, medical equipment use, and health insurance.

These are the diseases of Western Civilization, and they’re caused by sitting on your butt reading blog posts and eating Doritos all day instead of chasing down your dinner and killing it with your bare hands like a mighty caveman. Rar.

Luckily for us, unlike ALS, we know what causes them and how to prevent them. Unluckily for us, Doritos are really tasty.

But this also means that until we find some way to outlaw Doritos (or society collapses,) we’re going to keep spending more money treating Type-2 Diabetes and heart disease than on “curing” them.

I don’t see how “modernizing democracy” is going to put millions of people whose jobs have been automated back to work, though it might employ a few people to make websites.

As for education, you’d think Zuckerberg would have learned after throwing 100 MILLION DOLLARS at the Newark public schools and getting ZILCH–ZERO–NADA student improvement in return, but I guess not.

There’s this myth that students have “individual learning styles” and that if you could just figure out each student’s own special style and tailor the curriculum directly to them, they’d suddenly start learning.

In reality, this notion is idiotic. Learning is fundamental to our species; our brains do it automatically, all the time. Imagine a caveman who could only learn the location of a dangerous lion via pictograms sketched by other cavemen, rather than from someone shouting “Lion! Run!” Our brains are flexible and in the vast majority of cases will take in new information by whatever means they can.

But getting back to Zuck:

These achievements are within our reach. Let’s do them all in a way that gives everyone in our society a role. Let’s do big things, not only to create progress, but to create purpose.

So taking on big meaningful projects is the first thing we can do to create a world where everyone has a sense of purpose.

Overall, I think Zuckerberg has identified an important problem: the robot economy is replacing human workers, leaving people without a sense of purpose in their lives (or jobs.) Some of his proposed solutions, like “employ people in the solar panel industry,” might work, but others, like “vote online,” miss the mark completely.

Unfortunately, this is a really hard problem to solve. (Potential solutions: Universal Basic Income so we don’t all starve to death when the robots automate everything, or just let 90% of the population starve to death because they’ve become economically irrelevant. Choose your future wisely.)

Back to Zuck:

The second is redefining equality to give everyone the freedom they need to pursue purpose.

Many of our parents had stable jobs throughout their careers. Now we’re all entrepreneurial, whether we’re starting projects or finding or role. And that’s great. Our culture of entrepreneurship is how we create so much progress.

Now, an entrepreneurial culture thrives when it’s easy to try lots of new ideas. Facebook wasn’t the first thing I built. I also built games, chat systems, study tools and music players. I’m not alone. JK Rowling got rejected 12 times before publishing Harry Potter. Even Beyonce had to make hundreds of songs to get Halo. The greatest successes come from having the freedom to fail.

12? Is that it? I’ve got about a hundred rejections.

Actually, those 12 were from publishers after J.K. Rowling landed an agent, so that doesn’t tell you the full number of rejections she received trying to get that agent. 12 rejections from publishers sounds pretty par for the course–if not better than average. Publishing is an incredibly difficult world for new authors to break into.

But today, we have a level of wealth inequality that hurts everyone. When you don’t have the freedom to take your idea and turn it into a historic enterprise, we all lose. Right now our society is way over-indexed on rewarding success and we don’t do nearly enough to make it easy for everyone to take lots of shots.

Let’s face it. There is something wrong with our system when I can leave here and make billions of dollars in 10 years while millions of students can’t afford to pay off their loans, let alone start a business.

Look, I know a lot of entrepreneurs, and I don’t know a single person who gave up on starting a business because they might not make enough money. But I know lots of people who haven’t pursued dreams because they didn’t have a cushion to fall back on if they failed.

We all know we don’t succeed just by having a good idea or working hard. We succeed by being lucky too. If I had to support my family growing up instead of having time to code, if I didn’t know I’d be fine if Facebook didn’t work out, I wouldn’t be standing here today. If we’re honest, we all know how much luck we’ve had.

Every generation expands its definition of equality. Previous generations fought for the vote and civil rights. They had the New Deal and Great Society. Now it’s our time to define a new social contract for our generation.

We should have a society that measures progress not just by economic metrics like GDP, but by how many of us have a role we find meaningful. We should explore ideas like universal basic income to give everyone a cushion to try new things.

Called it. Is Zuck going to run on the full communism ticket?

We’re going to change jobs many times, so we need affordable childcare to get to work and healthcare that aren’t tied to one company. We’re all going to make mistakes, so we need a society that focuses less on locking us up or stigmatizing us. And as technology keeps changing, we need to focus more on continuous education throughout our lives.

Or… maybe we could work on making employment more stable and use UBI to let parents take care of their own children instead of treating them like consumer goods to be produced by the cheapest possible workers?

I’m kind of biased here because I went to daycare as a kid and hated it.

He’s correct on healthcare, though. It definitely shouldn’t be tied to employers.

Now, while I do think that we should actually take a good, hard look at our criminal justice system to make sure we aren’t locking up innocent people or giving completely unjust sentences, society doesn’t normally lock people up for “mistakes.” It locks them up for things like murder. Like the Newark schools, I fear this is an area where Zuckerberg really doesn’t understand what the actual problem is.

And yes, giving everyone the freedom to pursue purpose isn’t free. People like me should pay for it. Many of you will do well and you should too.

That’s why Priscilla and I started the Chan Zuckerberg Initiative and committed our wealth to promoting equal opportunity. These are the values of our generation. It was never a question of if we were going to do this. The only question was when.

According to Wikipedia:

The Chan Zuckerberg Initiative (CZI) is a limited liability company founded by Facebook founder Mark Zuckerberg and Priscilla Chan with an investment of “up to $1 billion in [Facebook] shares in each of the next three years”.[2][3][4] Its creation was announced on December 1, 2015, for the birth of their daughter, Maxima Chan Zuckerberg.[2]

The aim of the Chan Zuckerberg Initiative is to “advance human potential and promote equality in areas such as health, education, scientific research and energy”.[2]

Priscilla Chan’s Wikipedia page states, “On December 1, 2015, Chan and Zuckerberg posted an open Facebook letter to their newborn daughter. They pledged to donate 99% of their Facebook shares, then valued at $45 billion, to the Chan Zuckerberg Initiative, which is their new charitable foundation that focuses on health and education.[3][12]”

If I were their kid, I might be kind of pissed about my parents celebrating my birthday by giving away my inheritance.

Note, however, that:

The Chan Zuckerberg Initiative is not a charitable trust or a private foundation but a limited liability company which can be for-profit,[15][16] spend money on lobbying,[15][17] make political donations,[15][17][18] will not have to disclose its pay to its top five executives[17] and have fewer other transparency requirements, compared to a charitable trust.[15][16][17][18] Under this legal structure, as Forbes wrote it, “Zuckerberg will still control the Facebook shares owned by the Chan Zuckerberg Initiative”.[17][18]

So maybe this whole “charity” thing is just window-dressing. BTW, one of CZI’s projects is Andela:

Andela is a global engineering organization that extends engineering teams with world-class software developers. The company recruits the most talented developers on the African continent, shapes them into technical leaders, and places them as full-time distributed team members with companies that range from Microsoft and IBM to dozens of high-growth startups. Backed by Chan Zuckerberg Initiative, GV (Google Ventures) and Spark Capital, Andela is building the next generation of global technology leaders. Andela has offices in Lagos, Nairobi, Kampala and New York.

So Zuck’s going to solve the problem of people dying of hopelessness after losing their jobs to automation by training and importing third worlders to take even more of those jobs. (Meanwhile, they’re skimming the most talented people out of the third world, leaving countries there with even less human capital.)

But back to the speech:

Millennials are already one of the most charitable generations in history. In one year, three of four US millennials made a donation and seven out of ten raised money for charity.

I’m going to call bullshit on this, mostly because charitable giving correlates with age, with residential stability (not moving around too much), and most importantly, with religiosity. The nation’s most charitable state is Utah, followed closely by the Southern states. The least charitable states are in New England, which is highly atheist, and whose lower classes are notably clannish:

“It Was Like a War Zone”: Busing in Boston:

Southie was ground zero for anti-busing rage. Hundreds of white demonstrators — children and their parents — pelted a caravan of 20 school buses carrying students from nearly all-black Roxbury to all-white South Boston. The police wore riot gear.

“I remember riding the buses to protect the kids going up to South Boston High School,” Jean McGuire, who was a bus safety monitor, recalled recently. “And the bricks through the window. …

From the start of busing, police at South Boston High outnumbered students. Yet the violence continued. Then-Mayor Kevin White, making a rare TV appeal, declared a curfew and banned crowds near the school, but said there was only so much he could do to protect students and enforce the federal mandate. …

Law enforcement tactics toughened, and what had started out as an anti-busing problem soon included anti-police sentiment. Many of the police officers were Irish from Southie.

“I had never seen that kind of anger in my life. It was so ugly,” said patrolman Francis Mickey Roache (South Boston High Class of 1954), who was on duty at the school that first day of desegregation, when protesters turned on him.

“These are women, and people who were probably my mother’s age, and they were just screaming, ‘Mickey, you gotta quit, you gotta quit!’ They picked me out because they knew me. I was a South Boston boy, I grew up in Southie,” he remembered. …

A group of whites in South Boston brutally beat a Haitian resident of Roxbury who had driven into their neighborhood. A month later some black students stabbed a white student at South Boston High. The school was shut down for a month.

Then-Gov. Francis Sargent put the National Guard on alert. State police were called in and would remain on duty on the streets of South Boston for the next three years.

Maybe they should have sent the black kids to Zuckerberg’s school instead of the Irish schools.

Millennials do give to charity on the internet, however, as entrepreneur.com notes:

Millennials frequently get berated for supposedly being selfish and not generous. Despite being the largest U.S. demographic by age, the generation of 18-to-34 year-olds donates less and volunteers less for charitable causes than any other age group.

But maybe it depends where you’re looking.

Millennials are the driving force behind a movement that is rapidly disrupting the $241 billion market in the U.S. alone for charitable giving. Crowdfunding is no longer just for indie film projects and iPhone accessories. The segment for personal appeals such as medical expenses, memorials, adoptions and disaster relief is soaring–an estimated $3 billion in 2014, according to research firm Massolution.

Just for the record, I detest the term “Millennials.” But let’s get back to Zuck:

Purpose doesn’t only come from work. The third way we can create a sense of purpose for everyone is by building community. And when our generation says “everyone”, we mean everyone in the world.

OKAY FULL COMMUNISM.

Quick show of hands: how many of you are from another country? Now, how many of you are friends with one of these folks? Now we’re talking. We have grown up connected.

In a survey asking millennials around the world what defines our identity, the most popular answer wasn’t nationality, religion or ethnicity, it was “citizen of the world”. That’s a big deal.

Here’s a little challenge. Why don’t you go live in China, and when they ask to see your passport, just loudly proclaim that you’re a “citizen of the world” and therefore don’t need a visa to be there? (No, not as Zuckerberg, the man with 63 billion dollars, but as just a common millennial.)

Then move to Afghanistan and let the local warlords know that you’re a “citizen of the world” and going to live in their village, now, and would they please respect your religious and gender identities?

Try moving to Japan, North Korea, Bhutan, Iran, Saudi Arabia, Nigeria, Mexico, or any other nation, buying land, voting in local elections (if they have them,) and hanging out with your new neighbors.

Let me know how that works out.

Every generation expands the circle of people we consider “one of us”. For us, it now encompasses the entire world.

Zuck, have you even asked the people of Nigeria if they consider you “one of them”? You don’t speak their language. You don’t share their values (otherwise you’d have a lot more children.) You probably haven’t even spent a day of your life hanging out with your Nigerian friend in a poor neighborhood in Lagos.

I understand the naivety of a well-meaning young person who just wants to be friends with everyone, but adults understand that not everyone wants to be friends with them. Just because you like the pleasant idea of having a few friends from other countries does not mean that you are actually part of those cultures, nor that the people from those places actually want you there.

We understand the great arc of human history bends towards people coming together in ever greater numbers — from tribes to cities to nations — to achieve things we couldn’t on our own.

How did that work out when the German city states united into one country?

We get that our greatest opportunities are now global — we can be the generation that ends poverty, that ends disease. We get that our greatest challenges need global responses too — no country can fight climate change alone or prevent pandemics. Progress now requires coming together not just as cities or nations, but also as a global community.

But we live in an unstable time. There are people left behind by globalization across the world. It’s hard to care about people in other places if we don’t feel good about our lives here at home. There’s pressure to turn inwards.


This is the struggle of our time. The forces of freedom, openness and global community against the forces of authoritarianism, isolationism and nationalism. Forces for the flow of knowledge, trade and immigration against those who would slow them down.

Because we all know that Japan, one of the few nations that is actually dealing reasonably well with robotification by not adding more laborers to a shrinking market, is horribly un-free.

Trump supporter beaten bloody by “ideas”

This is not a battle of nations, it’s a battle of ideas. There are people in every country for global connection and good people against it.

This isn’t going to be decided at the UN either. It’s going to happen at the local level, when enough of us feel a sense of purpose and stability in our own lives that we can open up and start caring about everyone. The best way to do that is to start building local communities right now.

We all get meaning from our communities. Whether our communities are houses or sports teams, churches or music groups, they give us that sense we are part of something bigger, that we are not alone; they give us the strength to expand our horizons.

That’s why it’s so striking that for decades, membership in all kinds of groups has declined as much as one-quarter. That’s a lot of people who now need to find purpose somewhere else.

But I know we can rebuild our communities and start new ones because many of you already are.

Interestingly, Zuckerberg is citing data from Robert Putnam’s Bowling Alone: The Collapse and Revival of American Community. From the Amazon blurb:

Drawing on vast new data that reveal Americans’ changing behavior, Putnam shows how we have become increasingly disconnected from one another and how social structures—whether they be PTA, church, or political parties—have disintegrated. Until the publication of this groundbreaking work, no one had so deftly diagnosed the harm that these broken bonds have wreaked on our physical and civic health, nor had anyone exalted their fundamental power in creating a society that is happy, healthy, and safe.

Bowling Alone attributes these changes to a variety of causes, including TV and declining religiosity, but Putnam’s Wikipedia page notes:

In recent years, Putnam has been engaged in a comprehensive study of the relationship between trust within communities and their ethnic diversity. His conclusion based on over 40 cases and 30,000 people within the United States is that, other things being equal, more diversity in a community is associated with less trust both between and within ethnic groups. Although limited to American data, it puts into question both the contact hypothesis and conflict theory in inter-ethnic relations. According to conflict theory, distrust between the ethnic groups will rise with diversity, but not within a group. In contrast, contact theory proposes that distrust will decline as members of different ethnic groups get to know and interact with each other. Putnam describes people of all races, sex, socioeconomic statuses, and ages as “hunkering down,” avoiding engagement with their local community—both among different ethnic groups and within their own ethnic group. Even when controlling for income inequality and crime rates, two factors which conflict theory states should be the prime causal factors in declining inter-ethnic group trust, more diversity is still associated with less communal trust.

Lowered trust in areas with high diversity is also associated with:

  • Lower confidence in local government, local leaders and the local news media.
  • Lower political efficacy – that is, confidence in one’s own influence.
  • Lower frequency of registering to vote, but more interest and knowledge about politics and more participation in protest marches and social reform groups.
  • Higher political advocacy, but lower expectations that it will bring about a desirable result.
  • Less expectation that others will cooperate to solve dilemmas of collective action (e.g., voluntary conservation to ease a water or energy shortage).
  • Less likelihood of working on a community project.
  • Less likelihood of giving to charity or volunteering.
  • Fewer close friends and confidants.
  • Less happiness and lower perceived quality of life.
  • More time spent watching television and more agreement that “television is my most important form of entertainment”.
Alexander Wienberger, Holodomor

Perhaps it is a sign of how far our communities have degenerated that today’s young adults imagine themselves to be as connected to people in China and Nigeria as with their own neighbors.

Zuckerberg’s not dumb, but I suspect he has spent his entire life ensconced in a very expensive cocoon filled with people who are basically like him, from his high school, Phillips Exeter, to Harvard and Silicon Valley. Strip him of his 63 billion dollars and send him to a normal school, and Zuck’s just another unattractive dweeb whom women wouldn’t date and jocks would shove into lockers.

Communism starts with well-meaning idiots who want to help everyone and ends with gulags and mass graves.

 

That said, I think it’d be interesting to give Zuckerberg a chance to put his ideas into practice. Why not take his 63 billion and buy his own island, sign a semi-autonomy deal with whatever country’s jurisdiction it’s under, (probably in exchange for taxes,) and set up Zucktopia? He can let in whomever he wants–Africa’s top coders, Syrian refugees, Chinese gameshow hosts–start his own scientific and medical research institutions, and try to build a functional society from the ground up. If any of his ideas are terrible, he’ll probably figure that out quite quickly. If they’re good, he can turn his island into a purpose-driven economic powerhouse.

I don’t think Zuck has a good shot at the presidency just because he’s dorky and Americans hate dorks, but I didn’t predict Trump’s victory, either.

Entropy, Life, and Welfare (pt. 3/3)

Communism works so well, soldiers had to push Fidel Castro’s hearse because the Cuban government couldn’t find a working truck

This is Part Three of a series on how incentives affect the distribution of energy/resources throughout a society and the destructive effects of social systems like communism. (Part One and Part Two are here)

But before we criticize these programs too much, let’s understand where they came from:

The Industrial Revolution, which began around 1760 in Britain, created mass economic and social dislocation as millions of workers were forced off their farms and flooded into the cities.

Communism: the world’s single biggest source of murder in the 20th century

The booms and busts of the unregulated (and regulated) industrial economy caused sudden, unpredictable unemployment and, without a social safety net of some kind, starvation. This suffering unleashed Marxism, which soon transformed into an anti-capitalist, anti-Western ideology and tore across the planet, demolishing regimes and killing millions of people.

Reason.com attributes 94 million deaths to communism. The Black Book of Communism places the total between 85 and 100 million people. Historian on the Warpath totals almost 150 million people killed or murdered by communist governments, not including war deaths. (Wikipedia estimates that WWII killed, between battle deaths in Europe and the Pacific, disease, starvation, and genocide, 50-80 million people–and there were communists involved in WWII, also.)

The US and Europe, while not explicitly communist, have adopted many of socialism’s suggestions: Social Security, Welfare, Medicaid, etc., many in direct response to the Great Depression.

These solutions are, at best, stop-gap measures to deal with the massive changes new technologies are still causing. Remember, humans were hunter-gatherers for 190,000 years. We had a long time to get used to being hunter-gatherers. 10,000 years ago, a few of us started farming, and developed whole new cultures. A mere 200 years ago, the Industrial Revolution began spreading through Europe. Today, the “post-industrial information economy” (or “robot economy,” as I call it,) is upon us, and we have barely even begun to adapt.

We are in an age that is–out of our 200,000 years of existence–entirely novel and the speed of change is increasing. We have not yet figured out how to cope, how to structure society for the long-term so that we don’t accidentally break it.

We have gotten very good, however, at creative accounting to make it look like we are producing more than we are.

By the mid-1950s, the Industrial Revolution had brought levels of prosperity never before seen in human history to the US (and soon to Europe, Japan, Korea, etc.) But since the ’70s, things seem to have gone off-track.

People fault outsourcing and trade for the death of the great American job market, but technical progress and automation also deserve much of the blame. As the Daily Caller reports:

McDonald’s has announced plans to roll out automated kiosks and mobile pay options at all of its U.S. locations, raising questions about the future of its 1.5 million employees in the country and around the globe.

Roughly 500 restaurants in Florida, New York and California now have the automated ordering stations, and restaurants in Chicago, Boston, San Francisco, Seattle and Washington, D.C., will be outfitted in 2017, according to CNNMoney.

The locations that are seeing the first automated kiosks closely correlate with the fight for a $15 minimum wage. Gov. Andrew Cuomo signed into law a new $15 minimum wage for New York state in 2016, and the University of California has proposed to pay its low-wage employees $15.

There is an obvious trade-off between robots and employees: where wages are low enough, there is little incentive to invest capital in developing and purchasing robots. Where wages are high, there is more incentive to build robots.
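
To make that trade-off concrete, here’s a minimal back-of-the-envelope sketch in Python. Every number in it (kiosk cost, upkeep, hours worked) is invented purely for illustration, not taken from the article above; the only point is that the payback period on automation shrinks as the mandated wage rises.

```python
# Toy break-even calculation for the robots-vs-wages trade-off.
# All figures are hypothetical and chosen only to illustrate the direction of the effect.

def payback_years(kiosk_cost, upkeep_per_year, wage_per_hour, hours_per_year=2000):
    """Years until an automated kiosk pays for itself by replacing one worker."""
    annual_savings = wage_per_hour * hours_per_year - upkeep_per_year
    if annual_savings <= 0:
        return float("inf")  # at this wage, the robot never pays for itself
    return kiosk_cost / annual_savings

for wage in (9, 12, 15):
    years = payback_years(kiosk_cost=60_000, upkeep_per_year=5_000, wage_per_hour=wage)
    print(f"${wage}/hr labor -> kiosk pays for itself in {years:.1f} years")
```

At a (made-up) $9/hour wage the kiosk takes about four and a half years to pay off; push the wage to $15 and it pays off in under two and a half, which is exactly why the kiosks show up first where minimum wages rise.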

The Robot Economy will continue to replace the low-skilled, low-wage jobs that blue-collar workers and young people used to do. No longer will teenagers get summer jobs at McDonald’s. Many if not most of these workers are simply extraneous in the modern economy and cannot be “retrained” to do more information-dependent work. The expansion of the Welfare State, education (also paid for with tax dollars,) and make-work administrative positions can keep these displaced workers fed and maybe even “employed” for the foreseeable future, but they are not a long-term solution, and it is obvious that people in such degraded positions, unable to work, often lose the will to keep going.

But people do not appreciate the recommendation that they should just fuck off and die already. That’s how you get communist revolutions in the first place.

Mass immigration of unskilled labor into a market already shrinking due to automation / technological progress is a terrible idea. This is Basic Econ 101: Supply and Demand. If the supply of labor keeps increasing while the demand for labor keeps decreasing, the cost of labor (wages) will plummet. Likewise, corporations quite explicitly state that they want immigrants–including illegal ones–because they can pay them less.
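
For anyone who wants the Econ 101 picture spelled out, here is a tiny sketch with linear supply and demand curves. The coefficients are invented, not estimates of any real labor market; they just show that shifting supply out (more workers) and demand in (automation) both push the equilibrium wage down.

```python
# Econ-101 toy model: equilibrium wage where labor demanded equals labor supplied.
# Demand: Q = a - b*w   Supply: Q = c + d*w   =>   w* = (a - c) / (b + d)
# All coefficients are made up for illustration.

def equilibrium_wage(a, b, c, d):
    return (a - c) / (b + d)

baseline   = equilibrium_wage(a=100, b=2, c=10, d=3)   # baseline labor market
more_labor = equilibrium_wage(a=100, b=2, c=40, d=3)   # supply shifts out (more workers)
both       = equilibrium_wage(a=70,  b=2, c=40, d=3)   # plus demand shifts in (automation)

print(f"baseline wage:                   {baseline:.0f}")
print(f"after supply increase:           {more_labor:.0f}")
print(f"after supply up and demand down: {both:.0f}")
```

Same mechanism, just with numbers attached: 18, then 12, then 6.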

In an economy with more demand than supply for labor, labor can organize (unions) and advocate on behalf of its common interests, demanding a higher share of profits, health insurance, pensions, cigarette breaks, etc. When the supply of labor outstrips demand, labor cannot advocate on its own behalf, because any uppity worker can simply be replaced by some desperate, unemployed person willing to work for less and not make a fuss.

Note two professions in the US that are essentially protected by union-like organizations: doctors and lawyers. Both professions require years of expensive training at exclusive schools and high scores on difficult tests. Lawyers must also be members of their local Bar Associations, and doctors must endure residency. These requirements keep out the majority of people who would like to join these professions, and ensure high salaries for most who do.

While residency sounds abjectly awful, the situation for doctors in Britain and Ireland sounds much worse. Slate Star Codex goes into great detail about the problems:

Many of the junior doctors I worked with in Ireland were working a hundred hours a week. It’s hard to describe what working 100 hours a week is like. Saying “it means you work from 7 AM to 9 PM every day including weekends” doesn’t really cut it. Imagine the hobbies you enjoy and the people you love. Now imagine you can’t spend time on any of them, because you are being yelled at as people die all around you for fourteen hours a day, and when you get home you have just enough time to eat dinner, brush your teeth, possibly pay a bill or two, and curl up in a ball before you have to go do it all again, and your next day off is in two weeks.

And this is the best case scenario, where everything is spaced out nice and even. The junior doctors I knew frequently worked thirty-six hour shifts at a time (the European Court of Human Rights has since declined to fine Ireland for this illegal practice). …

The psychological consequences are predictable: after one year, 55% of junior doctors describe themselves as burned out, 30% meet criteria for moderate depression, and 12% report considering suicide.

A lot of American junior doctors are able to bear this by reminding themselves that it’s only temporary. The worst part, internship, is only one year; junior doctorness as a whole only lasts three or four. After that you become a full doctor and a free agent – probably still pretty stressed, but at least making a lot of money and enjoying a modicum of control over your life.

In Britain, this consolation is denied most junior doctors. Everyone works for the government, and the government has a strict hierarchy of ranks, only the top of which – “consultant” – has anything like the freedom and salary that most American doctors enjoy. It can take ten to twenty years for junior doctors in Britain to become consultants, and some never do.

I don’t know about you, but I really don’t want my doctor to be suicidal.

Now, you may notice that Scott doesn’t live in Ireland anymore, and similarly, many British doctors take their credentials and move abroad as quickly as possible. The British medical system would be forced to reform if not for the influx of foreign doctors willing to put up with hell in exchange for not living in the third world.

From the outside, many of these systems, from underfunded pensions to British medicine, look just fine. Indeed, an underfunded pension will operate just fine until the day it runs out of money. Until that day, everyone who claims the pension is in deep trouble looks like Chicken Little, running around claiming that the sky is falling.
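
Here’s a toy illustration of why an underfunded pension can look healthy for decades: the checks clear every single year, right up until the year the assets hit zero. The balances, contributions, payouts, and return below are all made up, assumed only for the sketch.

```python
# Toy underfunded pension: payouts exceed contributions, so even with decent
# investment returns the fund eventually hits zero. All numbers are hypothetical.

def years_until_broke(assets, contributions, payouts, annual_return=0.05, max_years=200):
    for year in range(1, max_years + 1):
        assets = assets * (1 + annual_return) + contributions - payouts
        if assets <= 0:
            return year
    return None  # still solvent over the horizon

year = years_until_broke(assets=1_000, contributions=40, payouts=120)
print(f"Every check clears on time... until year {year}.")
```

With these made-up numbers the fund pays out without a hitch for about twenty years, and then, in one year, it simply can’t.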

There’s a saying in finance: The market can stay irrational longer than you can stay solvent.

BTW, the entire state of California is in deep trouble, from budget problems to insane property tax laws. The state already consumes far more water than it receives, (and is set for massive forest fires,) but votes for increased population via immigration from Mexico. California’s economy is being propped up by–among other things–masses of cash flowing into Silicon Valley. This is Dot.Com Bubble 2.0, and like the first, it will pop–the only question is when. As Reuters reported last February:

LinkedIn Corp’s (LNKD.N) shares closed down 43.6 percent on Friday, wiping out nearly $11 billion of market value, after the social network for professionals shocked Wall Street with a revenue forecast that fell far short of expectations. …

As of Thursday, LinkedIn shares were trading at 50 times forward 12-month earnings, making it one of the most expensive stocks in the tech sector.

Twitter Inc (TWTR.N) trades at 29.5 times forward earnings, Facebook Inc (FB.O) at 33.8 times and Alphabet Inc (GOOGL.O) at 20.9 times.

Even after the selloff, LinkedIn’s shares may still be overvalued, according to Thomson Reuters StarMine data.

LinkedIn should be trading at $71.79, a 30 percent discount to the stock’s Friday’s low, according to StarMine’s Intrinsic Valuation model, which takes analysts’ five-year estimates and models the growth trajectory over a longer period.

LinkedIn has since been bought out by Microsoft for $26 billion. As Fortune notes, this is absolutely insane, as there is no way Microsoft can make back that much money off of LinkedIn:

Source: Fortune, http://fortune.com/2016/06/13/microsoft-linkedin-overpaid/

“EBITDA” stands for earnings before interest, taxes, depreciation, and amortization. There is absolutely no way that LinkedIn, a social network that barely turns a profit, is worth more than Sun, EMC, Compaq, and Time Warner.

Shares normally trade at around 20x a company’s previous year’s earnings, though right now the S&P 500’s P/E ratio is around 25. In 2016, LinkedIn’s P/E ratio has been around 180. (Even crazier, their ratio in 2015 was -1,220, because they lost money.)
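
If you want the arithmetic behind those multiples, P/E is just price over earnings; for a whole company, market cap over annual net income. The profit figure below is a placeholder I made up for illustration, not LinkedIn’s actual number, but it shows why a $26 billion price tag on a barely-profitable company implies an absurd multiple.

```python
# P/E ratio: price divided by earnings (for a whole company, market cap / net income).
# The profit figure is a hypothetical placeholder, not LinkedIn's reported earnings.

def pe_ratio(market_cap, annual_net_income):
    return market_cap / annual_net_income

deal_price     = 26_000_000_000   # what Microsoft paid
assumed_profit =    150_000_000   # made-up annual profit for illustration

pe = pe_ratio(deal_price, assumed_profit)
print(f"implied P/E: {pe:.0f}x (roughly {pe:.0f} years of profit to earn back the price)")
print(f"a 'normal' 20x multiple on that profit justifies about "
      f"${20 * assumed_profit / 1e9:.0f} billion, not $26 billion")
```

On those assumptions you get a multiple in the neighborhood of 170x, the same order of magnitude as the ~180 P/E mentioned above, and an implied “fair” price measured in single-digit billions.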

Ever wonder where all of that money from QE is going? It’s turning into Ferraris cruising around San Francisco, and LinkedIn is not the only offender.

But these companies will not maintain fantasy valuations forever.

(While we’re at it: Why the AOL-Time Warner Merger Went so Wrong:

When the deal was announced on Jan. 10, 2000, Stephen M. Case, a co-founder of AOL, said, “This is a historic moment in which new media has truly come of age.” His counterpart at Time Warner, the philosopher chief executive Gerald M. Levin, who was fond of quoting the Bible and Camus, said the Internet had begun to “create unprecedented and instantaneous access to every form of media and to unleash immense possibilities for economic growth, human understanding and creative expression.”

The trail of despair in subsequent years included countless job losses, the decimation of retirement accounts, investigations by the Securities and Exchange Commission and the Justice Department, and countless executive upheavals. Today, the combined values of the companies, which have been separated, is about one-seventh of their worth on the day of the merger.)

So, that was a bit of a long diversion into the sheer artificiality of much of our economy, and how sooner or later, the Piper must be paid.

When I try to talk to liberal friends about the effects of increasing automation and immigration on the incomes of the American working class, their response is that “We just need more regulation.”

In this cheerful fantasy, we can help my friend who cannot afford health insurance by requiring his employer to provide health insurance–when in reality, my friend now cannot find a job that lasts for more than a month because employers just fire him before the health insurance requirement kicks in. In fantasy land, you can protect poor people by making it harder for landlords to evict them, but in the real world, this makes it even harder for the poorest to get long-term housing because no landlord wants to take the chance of getting stuck with them. In fantasy land, immigration doesn’t hurt wages because you can just legislate a higher minimum wage, but the idea that you can legislate a wage that the market does not support is an absurdity worthy only of the USSR. In the real world, your job gets replaced with a robot.

This is not to say that we can’t have some form of welfare or social safety net to deal with the dislocations and difficulties of our new economy. Indeed, some form of social welfare may, in the long run, make the economic system more robust by allowing people to change jobs or weather temporary unemployment without dying. Nor does it mean that any inefficiency is going to break the system. But long-term, using legislation to create a problem and then using more legislation to prevent the market from correcting it increases inefficiency, and you are now spending resources to enforce both laws.

Just like Enron’s “creative accounting,” you cannot keep hiding losses indefinitely.

You can have open-borders capitalism with minimal welfare, in which the most skilled thrive and survive and the least skilled die out. This is more-or-less the system in Singapore (see here for a discussion of how they use personal savings accounts instead of most welfare; a discussion of poverty in Singapore; and Singapore’s migration policies.)

Or you can have a Japanese or Swedish-style welfare state, but no open borders, (because the system will collapse if you let in just anyone who wants free money [hint: everyone.])

But you cannot just smash two different systems together, heap more laws on top of them to try to prevent the market from responding, and expect it to carry on indefinitely producing the same levels of wealth and well-being as it always has.

The laws of thermodynamics are against you.

(Return to Part One and Part Two.)