Book Club: How to Create a Mind, pt 2/2

Ray Kurzweil, writer, inventor, thinker

Welcome back to EvX’s Book Club. Today we are finishing Ray Kurzweil’s How to Create a Mind: The Secret of Human Thought Revealed.

Spiders are interesting, but Kurzweil’s focus is computers, like Watson, which trounced the competition on Jeopardy!

I’ll let Wikipedia summarize Watson:

Watson was created as a question answering (QA) computing system that IBM built to apply advanced natural language processing, information retrieval, knowledge representation, automated reasoning, and machine learning technologies to the field of open domain question answering.[2]

The sources of information for Watson include encyclopedias, dictionaries, thesauri, newswire articles, and literary works. Watson also used databases, taxonomies, and ontologies. …

Watson parses questions into different keywords and sentence fragments in order to find statistically related phrases.[22] Watson’s main innovation was not in the creation of a new algorithm for this operation but rather its ability to quickly execute hundreds of proven language analysis algorithms simultaneously.[22][24] The more algorithms that find the same answer independently the more likely Watson is to be correct.[22] Once Watson has a small number of potential solutions, it is able to check against its database to ascertain whether the solution makes sense or not.[22]
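
The “many independent algorithms voting” strategy Wikipedia describes is essentially ensemble aggregation. Here is a minimal Python sketch of the idea; the scorers and candidate answers are entirely invented for illustration, not Watson’s actual code:

```python
from collections import defaultdict

def ensemble_answer(question, scorers):
    """Combine candidate answers from many independent scorers.

    Each scorer returns a dict mapping candidate answers to a
    confidence in [0, 1]. Candidates proposed independently by more
    scorers accumulate more total confidence -- the intuition behind
    "the more algorithms that find the same answer, the more likely
    Watson is to be correct."
    """
    totals = defaultdict(float)
    for scorer in scorers:
        for candidate, confidence in scorer(question).items():
            totals[candidate] += confidence
    return max(totals, key=totals.get)

# Toy stand-ins for Watson's hundreds of language-analysis algorithms:
scorers = [
    lambda q: {"Toronto": 0.4, "Chicago": 0.6},
    lambda q: {"Chicago": 0.7},
    lambda q: {"Chicago": 0.5, "New York": 0.3},
]
print(ensemble_answer("U.S. city whose largest airport...", scorers))
# -> "Chicago": proposed by all three scorers, so it wins the vote
```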

Kurzweil opines:

That is at least one reason why Watson represents such a significant milestone: Jeopardy! is precisely such a challenging language task. … What is perhaps not evident to many observers is that Watson not only had to master the language in the unexpected and convoluted queries, but for the most part its knowledge was not hand-coded. It obtained that knowledge by actually reading 200 million pages of natural-language documents, including all of Wikipedia… If Watson can understand and respond to questions based on 200 million pages–in three seconds!–there is nothing to stop similar systems from reading the other billions of documents on the Web. Indeed, that effort is now under way.

A point about the history of computing that may be petty of me to emphasize:

Babbage’s conception is quite miraculous when you consider the era in which he lived and worked. However, by the mid-twentieth century, his ideas had been lost in the mists of time (although they were subsequently rediscovered.) It was von Neumann who conceptualized and articulated the key principles of the computer as we know it today, and the world recognizes this by continuing to refer to the von Neumann machine as the principal model of computation. Keep in mind, though, that the von Neumann machine continually communicates data between its various units and within those units, so it could not be built without Shannon’s theorems and the methods he devised for transmitting and storing reliable digital information. …

You know what? No, it’s not petty.

Amazon lists 57 books about Ada Lovelace aimed at children, 14 about Alan Turing, and ZERO about John von Neumann.

(Some of these search results are inevitably irrelevant, but the counts are roughly correct.)

“EvX,” you may be saying, “Why are you counting children’s books?”

Because children are our future, and the books that get published for children show what society deems important for children to learn–and will have an effect on what adults eventually know.

I don’t want to demean Ada Lovelace’s role in the development of software, but surely von Neumann’s contributions to the field are worth a single book!

*Slides soapbox back under the table*

Anyway, back to Kurzweil, now discussing quantum mechanics:

There are two ways to view the questions we have been considering–converse Western and Eastern perspectives on the nature of consciousness and of reality. In the Western perspective, we start with a physical world that evolves patterns of information. After a few billion years of evolution, the entities in that world have evolved sufficiently to become conscious beings. In the Eastern view, consciousness is the fundamental reality; the physical world only comes into existence through the thoughts of conscious beings. …

The East-West divide on the issue of consciousness has also found expression in opposing schools of thought in the field of subatomic physics. In quantum mechanics, particles exist in what are called probability fields. Any measurement carried out on them by a measuring device causes what is called a collapse of the wave function, meaning that the particle suddenly assumes a particular location. A popular view is that such a measurement constitutes observation by a conscious observer… Thus the particle assumes a particular location … only when it is observed. Basically particles figure that if no one is bothering to look at them, they don’t need to decide where they are. I call this the Buddhist school of quantum mechanics …

Niels Bohr

Or as Niels Bohr put it, “A physicist is just an atom’s way of looking at itself.” He also claimed that we could describe electrons as exercising free will in choosing their positions, a statement I do not think he meant literally; “We must be clear that when it comes to atoms, language can be used only as in poetry,” as he put it.

Kurzweil explains the Western interpretation of quantum mechanics:

There is another interpretation of quantum mechanics… In this analysis, the field representing a particle is not a probability field, but rather just a function that has different values in different locations. The field, therefore, is fundamentally what the particle is. … The so-called collapse of the wave function, this view holds, is not a collapse at all. … It is just that a measurement device is also made up of particles with fields, and the interaction of the particle field being measured and the particle fields of the measuring device results in a reading of the particle being in a particular location. The field, however, is still present. This is the Western interpretation of quantum mechanics, although it is interesting to note that the more popular view among physicists worldwide is what I have called the Eastern interpretation.

Soviet atomic bomb, 1951

For example, Bohr had the yin-yang symbol on his coat of arms, along with the motto contraria sunt complementa, or contraries are complementary. Oppenheimer was such a fan of the Bhagavad Gita that he read it in Sanskrit and quoted it upon successful completion of the Trinity Test, “If the radiance of a thousand suns were to burst at once into the sky, that would be like the splendor of the mighty one,” and “Now I am become death, the destroyer of worlds.” He credited the Gita as one of the most important books in his life.

Why the appeal of Eastern philosophy? Is it something about physicists and mathematicians? Leibniz, after all, was fond of the I Ching. As Wikipedia says:

Leibniz was perhaps the first major European intellectual to take a close interest in Chinese civilization, which he knew by corresponding with, and reading other works by, European Christian missionaries posted in China. Having read Confucius Sinarum Philosophus in the first year of its publication,[153] he concluded that Europeans could learn much from the Confucian ethical tradition. He mulled over the possibility that the Chinese characters were an unwitting form of his universal characteristic. He noted with fascination how the I Ching hexagrams correspond to the binary numbers from 000000 to 111111, and concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical mathematics he admired.[154] Leibniz communicated his ideas of the binary system representing Christianity to the Emperor of China hoping it would convert him.[84] Leibniz may be the only major Western philosopher who attempted to accommodate Confucian ideas to prevailing European beliefs.[155]

Leibniz’s attraction to Chinese philosophy originates from his perception that Chinese philosophy was similar to his own.[153] The historian E.R. Hughes suggests that Leibniz’s ideas of “simple substance” and “pre-established harmony” were directly influenced by Confucianism, pointing to the fact that they were conceived during the period that he was reading Confucius Sinarum Philosophus.[153]

Perhaps it is just that physicists and mathematicians are naturally curious people, and Eastern philosophy is novel to a Westerner; or perhaps, by adopting Eastern ideas, they were able to purge their minds of earlier theories of how the universe works, creating a blank space in which to evaluate new data without being biased by old conceptions; or perhaps it is just something about the way their minds work.

As for quantum mechanics, I favor the de Broglie-Bohm interpretation, but obviously I am not a physicist and my opinion doesn’t count for much. What do you think?

But back to the book. If you are fond of philosophical ruminations on the nature of consciousness, like “If someone who could only see in black and white read extensively about the color red, could they ever achieve the qualia of actually seeing it?” or “What if a man were locked in a room with a perfect Chinese rulebook that told him which Chinese characters to write in response to any set of characters written on notes passed under the door? The responses would be in perfect Chinese, but the man himself understands not a word of Chinese,” then you’ll enjoy the discussion. If you already covered all of this back in Philosophy 101, you might find it a bit redundant.

Kurzweil notes that conditions have improved massively over the past century for almost everyone on earth, but people are increasingly anxious:

A primary reason people believe life is getting worse is because our information about the problems of the world has steadily improved. If there is a battle today somewhere on the planet, we experience it almost as if we were there. During World War II, tens of thousands of people might perish in a battle, and if the public could see it at all it was in a grainy newsreel in a movie theater weeks later. During World War I a small elite could read about the progress of the conflict in the newspaper (without pictures). During the nineteenth century there was almost no access to news in a timely fashion for anyone.

As for the future of man, machines, and code, Kurzweil is even more optimistic than Auerswald:

The last invention that biological evolution needed to make–the neocortex–is inevitably leading to the last invention that humanity needs to make–truly intelligent machines–and the design of one is inspiring the other. … by the end of this century we will be able to create computation at the limits of what is possible, based on the laws of physics… We call matter and energy organized in this way “computronium,” which is vastly more powerful pound per pound than the human brain. It will not just be raw computation but will be infused with intelligent algorithms constituting all of human-machine knowledge. Over time we will convert much of the mass and energy in our tiny corner of the galaxy that is suitable for this purpose to computronium. … we will need to spread out to the rest of the galaxy and universe. …

How long will it take for us to spread our intelligence in its nonbiological form throughout the universe? … waking up the universe, and then intelligently deciding its fate by infusing it with our human intelligence in its nonbiological form, is our destiny.

Whew! That is quite the ending–and with that, we’ll end as well. I hope you enjoyed the book. What did you think of it? Will Humanity 2.0 be good? Bad? Totally different? Or does the Fermi Paradox imply that Kurzweil is wrong? Did you like this shorter Book Club format? And do you have any ideas for our next Book Club pick?


Book Club: How to Create a Mind by Ray Kurzweil pt 1/2

Welcome to our discussion of Ray Kurzweil’s How to Create a Mind: The Secret of Human Thought Revealed. This book was requested by one of my fine readers; I hope you have enjoyed it.

If you aren’t familiar with Ray Kurzweil (you must be new to the internet), he is a computer scientist, inventor, and futurist whose work focuses primarily on artificial intelligence and phrases like “technological singularity.”

Wikipedia really likes him.

The book is part neuroscience, part explanations of how various AI programs work. Kurzweil uses models of how the brain works to enhance his pattern-recognition programs, and evidence from what works in AI programs to build support for theories on how the brain works.

The book delves into questions like “What is consciousness?” and “Could we recognize a sentient machine if we met one?” along with a brief history of computing and AI research.

My core thesis, which I call the Law of Accelerating Returns (LOAR), is that fundamental measures of information technology follow predictable and exponential trajectories…

I found this an interesting sequel to Auerswald’s The Code Economy and counterpart to Gazzaniga’s Who’s In Charge? Free Will and the Science of the Brain, which I listened to in audiobook form and therefore cannot quote very easily. Nevertheless, it’s a good book and I recommend it if you want more on brains.

The quintessential example of the law of accelerating returns is the perfectly smooth, doubly exponential growth of the price/performance of computation, which has held steady for 110 years through two world wars, the Great Depression, the Cold War, the collapse of the Soviet Union, the reemergence of China, the recent financial crisis, … Some people refer to this phenomenon as “Moore’s law,” but… [this] is just one paradigm among many.

From Ray Kurzweil,
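
To make “doubly exponential” concrete: in ordinary exponential growth the doubling time is fixed, while in doubly exponential growth the doubling time itself shrinks. A toy comparison in Python, with all constants invented purely for illustration:

```python
import math

def exponential(t, doubling_time=2.0):
    """Ordinary exponential growth: a fixed doubling time."""
    return 2 ** (t / doubling_time)

def doubly_exponential(t, a=0.1):
    """The exponent itself grows exponentially, so the effective
    doubling time keeps shrinking (constants chosen arbitrarily)."""
    return 2 ** (math.exp(a * t) - 1)

# The doubly exponential curve starts slower, but eventually dwarfs
# any fixed-doubling-time curve:
for t in (10, 20, 40, 60):
    print(t, f"{exponential(t):.3g}", f"{doubly_exponential(t):.3g}")
```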

Auerswald claims that the advance of “code” (that is, technologies like writing that allow us to encode information) has, for the past 40,000 years or so, supplemented and enhanced human abilities, making our lives better. Auerswald is not afraid of increasing mechanization and robotification of the economy putting people out of jobs because he believes that computers and humans are good at fundamentally different things. Computers, in fact, were invented to do things we are bad at, like decoding encrypted messages, not stuff we’re good at, like eating.

The advent of computers, in his view, lets us concentrate on the things we’re good at, while off-loading the stuff we’re bad at to the machines.

Kurzweil’s view is different. While he agrees that computers were originally invented to do things we’re bad at, he also thinks that the computers of the future will be very different from those of the past, because they will be designed to think like humans.

A computer that can think like a human can compete with a human–and since it isn’t limited in its processing power by pelvic widths, it may well out-compete us.

But Kurzweil does not seem worried:

Ultimately we will create an artificial neocortex that has the full range and flexibility of its human counterpart. …

When we augment our own neocortex with a synthetic version, we won’t have to worry about how much additional neocortex can physically fit into our bodies and brains, as most of it will be in the cloud, like most of the computing we use today. I estimated earlier that we have on the order of 300 million pattern recognizers in our biological neocortex. That’s as much as could be squeezed into our skulls even with the evolutionary innovation of a large forehead and with the neocortex taking about 80 percent of the available space. As soon as we start thinking in the cloud, there will be no natural limits–we will be able to use billions or trillions of pattern recognizers, basically whatever we need, and whatever the law of accelerating returns can provide at each point in time. …

Last but not least, we will be able to back up the digital portion of our intelligence. …

That is kind of what I already do with this blog. The downside is that sometimes you people see my incomplete or incorrect thoughts.

On the squishy side, Kurzweil writes of the biological brain:

The story of human intelligence starts with a universe that is capable of encoding information. This was the enabling factor that allowed evolution to take place. …

The story of evolution unfolds with increasing levels of abstraction. Atoms–especially carbon atoms, which can create rich information structures by linking in four different directions–formed increasingly complex molecules. …

A billion years later, a complex molecule called DNA evolved, which could precisely encode lengthy strings of information and generate organisms described by these “programs”. …

The mammalian brain has a distinct aptitude not found in any other class of animal. We are capable of hierarchical thinking, of understanding a structure composed of diverse elements arranged in a pattern, representing that arrangement with a symbol, and then using that symbol as an element in a yet more elaborate configuration. …

I really want to know if squids or octopuses can engage in symbolic thought.

Through an unending recursive process we are capable of building ideas that are ever more complex. … Only Homo sapiens have a knowledge base that itself evolves, grows exponentially, and is passed down from one generation to another.

Kurzweil proposes an experiment to demonstrate something of how our brains encode memories: say the alphabet backwards.

If you’re among the few people who’ve memorized it backwards, try singing “Twinkle Twinkle Little Star” backwards.

It’s much more difficult than doing it forwards.

This suggests that our memories are sequential and in order. They can be accessed in the order they are remembered. We are unable to reverse the sequence of a memory.

Funny how that works.
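
In data-structure terms, this is how a singly linked list behaves: forward traversal is cheap, while reverse traversal requires reconstructing the sequence first. A loose analogy only, sketched in Python:

```python
class Node:
    """One element of a memory sequence, linked only to what comes next."""
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def forward(head):
    """Walking forward is trivial: just follow the links."""
    while head:
        yield head.value
        head = head.next

def backward(head):
    """Walking backward requires a full forward pass first --
    roughly what we do when we laboriously reverse the alphabet."""
    return list(forward(head))[::-1]

# Build A -> B -> C -> ... as a linked sequence.
head = None
for letter in reversed("ABCDEFG"):
    head = Node(letter, head)

print("".join(forward(head)))   # ABCDEFG -- easy
print("".join(backward(head)))  # GFEDCBA -- needs the whole sequence rebuilt
```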

On the neocortex itself:

A critically important observation about the neocortex is the extraordinary uniformity of its fundamental structure. … In 1957 Mountcastle discovered the columnar organization of the neocortex. … [In 1978] he described the remarkably unvarying organization of the neocortex, hypothesizing that it was composed of a single mechanism that was repeated over and over again, and proposing the cortical column as the basic unit. The differences in the height of certain layers in different regions noted above are simply differences in the amount of interconnectivity that the regions are responsible for dealing with. …

Extensive experimentation has revealed that there are in fact repeating units within each column. It is my contention that the basic unit is a pattern recognizer and that this constitutes the fundamental component of the neocortex.
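
Here is a toy sketch of what a hierarchy of such pattern recognizers might look like in Python. The structure (strokes feeding into a letter recognizer) and the thresholds are my own invented illustration, not Kurzweil’s actual model:

```python
class PatternRecognizer:
    """A unit that fires when enough of its child patterns fire.

    Leaf recognizers match raw input directly; higher-level ones match
    combinations of lower-level recognizers, forming the kind of
    hierarchy Kurzweil attributes to cortical columns.
    """
    def __init__(self, name, children=None, threshold=1.0, matcher=None):
        self.name = name
        self.children = children or []
        self.threshold = threshold
        self.matcher = matcher  # leaf-level predicate on raw input

    def activation(self, data):
        if self.matcher is not None:  # leaf: look at the raw input
            return 1.0 if self.matcher(data) else 0.0
        fired = sum(c.activation(data) for c in self.children)
        return 1.0 if fired / len(self.children) >= self.threshold else 0.0

# Leaves recognize strokes; "A" fires only when all its strokes fire.
left  = PatternRecognizer("left stroke",  matcher=lambda d: "/" in d)
right = PatternRecognizer("right stroke", matcher=lambda d: "\\" in d)
bar   = PatternRecognizer("crossbar",     matcher=lambda d: "-" in d)
letter_a = PatternRecognizer("letter A", children=[left, right, bar])

print(letter_a.activation("/-\\"))  # 1.0 -- all three strokes present
print(letter_a.activation("/\\"))   # 0.0 -- crossbar missing
```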

As I read, Kurzweil’s hierarchical models reminded me of Chomsky’s theories of language–Ray and Noam are both associated with MIT and have probably conversed many times. Kurzweil does get around to discussing Chomsky’s theories and their relationship to his work:

Language is itself highly hierarchical and evolved to take advantage of the hierarchical nature of the neocortex, which in turn reflects the hierarchical nature of reality. The innate ability of humans to learn the hierarchical structures in language that Noam Chomsky wrote about reflects the structure of the neocortex. In a 2002 paper he co-authored, Chomsky cites the attribute of “recursion” as accounting for the unique language faculty of the human species. Recursion, according to Chomsky, is the ability to put together small parts into a larger chunk, and then use that chunk as a part in yet another structure, and to continue this process iteratively. In this way we are able to build the elaborate structure of sentences and paragraphs from a limited set of words. Although Chomsky was not explicitly referring here to brain structure, the capability he is describing is exactly what the neocortex does. …

This sounds good to me, but I am under the impression that Chomsky’s linguistic theories are now considered outdated. Perhaps that is only his theory of universal grammar, though. Any linguistics experts care to weigh in?
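
As an aside, recursion in Chomsky’s sense–small parts combined into chunks that can themselves serve as parts–is easy to make concrete with a toy grammar (all rules invented for illustration):

```python
import random

# A tiny recursive grammar: a sentence can contain another sentence.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the cat"], ["the dog"], ["the physicist"]],
    "VP": [["sleeps"], ["knows that", "S"]],  # <- recursion: VP contains S
}

def generate(symbol="S", depth=0, max_depth=3):
    """Expand a symbol; recursion lets sentences nest inside sentences."""
    if symbol not in GRAMMAR:
        return symbol
    options = GRAMMAR[symbol]
    if depth >= max_depth:  # cut off runaway recursion
        options = [o for o in options if "S" not in o] or options
    return " ".join(generate(s, depth + 1, max_depth)
                    for s in random.choice(options))

print(generate())
# e.g. "the dog knows that the physicist knows that the cat sleeps"
```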

According to Wikipedia:

Within the field of linguistics, McGilvray credits Chomsky with inaugurating the “cognitive revolution”.[175] McGilvray also credits him with establishing the field as a formal, natural science,[176] moving it away from the procedural form of structural linguistics that was dominant during the mid-20th century.[177] As such, some have called him “the father of modern linguistics”.[178][179][180][181]

The basis to Chomsky’s linguistic theory is rooted in biolinguistics, holding that the principles underlying the structure of language are biologically determined in the human mind and hence genetically transmitted.[182] He therefore argues that all humans share the same underlying linguistic structure, irrespective of sociocultural differences.[183] In adopting this position, Chomsky rejects the radical behaviorist psychology of B. F. Skinner, which views the mind as a tabula rasa (“blank slate”) and thus treats language as learned behavior.[184] Accordingly, he argues that language is a unique evolutionary development of the human species and is unlike modes of communication used by any other animal species.[185][186] Chomsky’s nativist, internalist view of language is consistent with the philosophical school of “rationalism”, and is contrasted with the anti-nativist, externalist view of language, which is consistent with the philosophical school of “empiricism”.[187][174]

Anyway, back to Kurzweil, who has an interesting bit about love:

Science has recently gotten into the act as well, and we are now able to identify the biochemical changes that occur when someone falls in love. Dopamine is released, producing feelings of happiness and delight. Norepinephrine levels soar, which leads to a racing heart and overall feelings of exhilaration. These chemicals, along with phenylethylamine, produce elevation, high energy levels, focused attention, loss of appetite, and a general craving for the object of one’s desire. … serotonin levels go down, similar to what happens in obsessive-compulsive disorder….

If these biochemical phenomena sound similar to those of the fight-or-flight syndrome, they are, except that we are running toward something or someone; indeed, a cynic might say toward rather than away from danger. The changes are also fully consistent with those of the early phase of addictive behavior. …  Studies of ecstatic religious experiences also show the same physical phenomena; it can be said that the person having such an experience is falling in love with God or whatever spiritual connection on which they are focused. …

Religious readers care to weigh in?

Consider two related species of voles: the prairie vole and the montane vole. They are pretty much identical, except that the prairie vole has receptors for oxytocin and vasopressin, whereas the montane vole does not. The prairie vole is noted for lifetime monogamous relationships, while the montane vole resorts almost exclusively to one-night stands.

Learning by species:

A mother rat will build a nest for her young even if she has never seen another rat in her lifetime. Similarly, a spider will spin a web, a caterpillar will create her own cocoon, and a beaver will build a dam, even if no contemporary ever showed them how to accomplish these complex tasks. That is not to say that these are not learned behaviors. It is just that the animals did not learn them in a single lifetime… The evolution of animal behavior does constitute a learning process, but it is learning by the species, not by the individual, and the fruits of this learning process are encoded in DNA.
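
“Learning by the species, encoded in DNA” is exactly the loop that evolutionary algorithms imitate: no individual learns anything, but selection plus mutation across generations encodes a solution in the population. A minimal sketch, with the target behavior and all parameters invented:

```python
import random

TARGET = "BUILD A DAM"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(genome):
    """How many characters match the behavior being selected for."""
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in genome)

# A random initial population: no individual knows how to "build a dam."
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(100)]

for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Selection + mutation: the species learns; the individuals never do.
    parents = population[:20]
    population = [mutate(random.choice(parents)) for _ in range(100)]

population.sort(key=fitness, reverse=True)
print(generation, population[0])
```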

I think that’s enough for today; what did you think? Did you enjoy the book? Is Kurzweil on the right track with his pattern recognizers? Are non-biological neocortexes on the horizon? Will we soon convert the solar system to computronium?

Let’s continue this discussion next Monday–so if you haven’t read the book yet, you still have a whole week to finish.


Book Club: The Code Economy, Chapter 11: Education and Death

Welcome back to EvX’s book club. Today we’re reading Chapter 11 of The Code Economy, Education.

…since the 1970s, the economically fortunate among us have been those who made the “go to college” choice. This group has seen its income grow rapidly and its share of the aggregate wealth increase sharply. Those without a college education have watched their income stagnate and their share of the aggregate wealth decline. …

Middle-age white men without a college degree have been beset by sharply rising death rates–a phenomenon that contrasts starkly with middle-age Latino and African American men, and with trends in nearly every other country in the world.

It turns out that I have a lot of graphs on this subject. There’s a strong correlation between “white death” and “Trump support.”

White vs. non-white Americans

American whites vs. other first world nations

source

But “white men” doesn’t tell the complete story, as death rates for women have been increasing at about the same rate. The Great White Death seems to be as much a female phenomenon as a male one–men just started out with higher death rates in the first place.

Many of these are deaths of despair–suicide, directly or through simply giving up on living. Many involve drugs or alcohol. And many are due to diseases, like cancer and diabetes, that used to hit later in life.

We might at first think the change is just an artifact of more people going to college–perhaps there was always a subset of people who died young, but in the days before most people went to college, nothing distinguished them particularly from their peers. Today, with more people going to college, perhaps the destined-to-die are disproportionately concentrated among folks who didn’t make it to college. However, if this were true, we’d expect death rates to hold steady for whites overall–and they have not.
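
The composition argument can be checked with toy numbers: if a fixed “destined-to-die” share merely gets concentrated among non-college whites, the non-college death rate rises but the overall rate stays flat. A sketch with invented figures:

```python
def death_rates(population, risk_share, college_share):
    """Partition a population with a fixed share of high-risk people
    into college / non-college groups, assuming (for illustration only)
    that the high-risk people never attend college."""
    at_risk = population * risk_share
    non_college = population * (1 - college_share)
    return {
        "overall": at_risk / population,
        "non-college": at_risk / non_college,
    }

# Same 5% high-risk share; only college attendance changes.
print(death_rates(1000, 0.05, college_share=0.10))
# {'overall': 0.05, 'non-college': 0.056}
print(death_rates(1000, 0.05, college_share=0.40))
# {'overall': 0.05, 'non-college': 0.083} -- the non-college rate rises,
# but the overall rate holds steady, which is not what we observe.
```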

Whatever is affecting lower-class whites, it’s real.

Auerswald then discusses the “permanent income hypothesis,” developed by Milton Friedman: children and young adults devote their time to education (even going into debt), which allows them to get better jobs in mid-life. Once working, they stop going to school and start saving for retirement. Then they retire.

The permanent income hypothesis is built into the very structure of our society, from Public Schools that serve students between the ages of 5 and 18, to Pell Grants for college students, to Social Security benefits that kick in at 65. The assumption, more or less, is that a one-time investment in education early in life will pay off for the rest of one’s life–a payout of such returns to scale that it is even sensible for students and parents to take out tremendous debt to pay for that education.

But this is dependent on that education actually paying off–and that is dependent on the skills people learn during their educations being in demand and sufficient for their jobs for the next 40 years.

The system falls apart if technology advances and thus job requirements change faster than once every 40 years. We are now looking at a world where people’s investments in education can be obsolete by the time they graduate, much less by the time they retire.
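
The problem can be stated as a simple present-value calculation: a one-time education is worth its cost only if the earnings premium it buys lasts long enough. A back-of-the-envelope sketch in Python, with every number invented purely for illustration:

```python
def education_npv(cost, annual_premium, years_skill_lasts, discount=0.03):
    """Net present value of an education that raises earnings by
    `annual_premium` per year until the skill becomes obsolete."""
    pv_premium = sum(annual_premium / (1 + discount) ** t
                     for t in range(1, years_skill_lasts + 1))
    return pv_premium - cost

# Same $100k degree, same $10k/yr premium -- wildly different outcomes
# depending on how long the skills stay relevant:
print(education_npv(100_000, 10_000, 40))  # ~ +$131k: the old assumption
print(education_npv(100_000, 10_000, 10))  # ~ -$15k: obsolete in a decade
```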

Right now, people are trying to make up for the decreasing returns to education (a high school diploma does not get you the same job today as it did in 1950) by investing more money and time into the single-use system–encouraging toddlers to go to school on the one end and poor students to take out more debt for college on the other.

This is probably a mistake, given the time-dependent nature of the problem.

The obvious solution is to change how we think of education and work. Instead of a single, one-time investment, education will have to continue after people begin working, probably in bursts. Companies will continually need to re-train workers in new technology and innovations. Education cannot be just a single investment, but a life-long process.

But that is hard to do if people are already in debt from all of the college they just paid for.

Auerswald then discusses some fascinating work by Bessen on how the industrial revolution affected incomes and production among textile workers:

… while a handloom weaver in 1800 required nearly forty minutes to weave a yard of coarse cloth using a single loom, a weaver in 1902 could do the same work operating eighteen Northrop looms in less than a minute, on average. This striking point relates to the relative importance of the accumulation of capital to the advance of code: “Of the roughly thirty-nine-minute reduction in labor time per yard, capital accumulation due to the changing cost of capital relative to wages accounted for just 2 percent of the reduction; invention accounted for 73 percent of the reduction; and 25 percent of the time saving came from greater skill and effort of the weavers.” … “the role of capital accumulation was minimal, counter to the conventional wisdom.”
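
A quick back-of-the-envelope check of Bessen’s decomposition, applying the quoted percentages to the roughly thirty-nine minutes saved per yard:

```python
saved_minutes = 39  # ~40 min per yard in 1800, under 1 min in 1902

shares = {"capital accumulation": 0.02,
          "invention": 0.73,
          "weaver skill and effort": 0.25}

for source, share in shares.items():
    print(f"{source}: {share * saved_minutes:.1f} minutes saved")
# capital accumulation: 0.8 minutes
# invention: 28.5 minutes
# weaver skill and effort: 9.8 minutes
```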

Then Auerswald proclaims:

What was the role of formal education in this process? Essentially nonexistent.

Boom.

New technologies are simply too new for anyone to learn about them in school. Flexible thinkers who learn fast (generalists) thus benefit from new technologies and are crucial for their early development. Once a technology matures, however, it becomes codified into platforms and standards that can be taught, at which point demand for generalists declines and demand for workers with educational training in the specific field rises.

For Bessen, formal education and basic research are not the keys to the development of economies that they are often represented as being. What drives the development of economies is learning by doing and the advance of code–processes that are driven at least as much by non-expert tinkering as by formal research and instruction.

Make sure to read the endnotes to this chapter; several of them are very interesting. For example, #3 begins:

“Typically, new technologies demand that a large number of variables be properly controlled. Henry Bessemer’s simple principle of refining molten iron with a blast of oxygen works properly only at the right temperatures, in the right size vessel, with the right sort of vessel refractory lining, the right volume and temperature of air, and the right ores…” Furthermore, the products of these factories were really ones that, in the United States, previously had been created at home, not by craftsmen…

#8 states:

“Early-stage technologies–those with relatively little standardized knowledge–tend to be used at a smaller scale; activity is localized; personal training and direct knowledge sharing are important, and labor markets do not compensate workers for their new skills. Mature technologies–with greater standardized knowledge–operate at large scale and globally, market permitting; formalized training and knowledge are more common; and robust labor markets encourage workers to develop their own skills.” … The intensity of interactions that occur in cities is also important in this phase: “During the early stages, when formalized instruction is limited, person-to-person exchange is especially important for spreading knowledge.”

This reminds me of a post on Bruce Charlton’s blog about “Head Girl Syndrome”:

The ideal Head Girl is an all-rounder: performs extremely well in all school subjects and has a very high Grade Point Average. She is excellent at sports, Captaining all the major teams. She is also pretty, popular, sociable and well-behaved.

The Head Girl will probably be a big success in life…

But the Head Girl is not, cannot be, a creative genius.

*

Modern society is run by Head Girls, of both sexes, hence there is no place for the creative genius.

Modern Colleges aim at recruiting Head Girls, so do universities, so does science, so do the arts, so does the mass media, so does the legal profession, so does medicine, so does the military…

And in doing so, they filter-out and exclude creative genius.

Creative geniuses invent new technologies; head girls oversee the implementation and running of code. Systems that run on code can run very smoothly and do many things well, but they are bad at handling creative geniuses, as many a genius will tell you about their public school experience.

How the different stages in the adoption of new technology and its codification into platforms translate into wages over time is a subject I’d like to see more of.

Auerswald then turns to the perennial problem of what happens when jobs not only change but disappear entirely due to increasing robotification:

Indeed, many of the frontier business models shaping the economy today are based on enabling a sharp reduction in the number of people required to perform existing tasks.

One possibility Auerswald envisions is a kind of return to the personalized markets of yesteryear, before massive industrial giants like Walmart sprang up. Via internet-based platforms like Uber or AirBNB, individuals can connect directly with people who’d like to buy their goods or services.

Since services make up more than 84% of the US economy and an increasingly comparable percentage in countries elsewhere, this is a big deal.

It’s easy to imagine a future in which we are all like some sort of digital Amish, continually networked via our phones to engage in small transactions like sewing a pair of trousers for a neighbor, mowing a lawn, selling a few dozen tacos, or driving people to the airport during a few spare hours on a Friday afternoon. It’s also easy to imagine how Walmart might still have massive economies of scale over individuals and the whole system might fail miserably.

However, if we take the entrepreneurial perspective, such enterprises are intriguing. Uber and Airbnb work by essentially “unlocking” latent assets–time when people’s cars or homes were sitting around unused. Anyone who can find other, similar latent assets and figure out how to unlock them could become similarly successful.

I’ve got an underutilized asset: rural poor. People in cities are easy to hire and easy to direct toward educational opportunities. Kids growing up in rural areas are often out of the communications loop (the internet doesn’t work terribly well in many rural areas) and have to drive a long way to job interviews.

In general, it’s tough to network large rural areas in the same ways that cities get networked.

On the matter of why peer-to-peer networks have emerged in certain industries, Auerswald makes a claim that I feel compelled to contradict:

The peer-to-peer business models in local transportation, hospitality, food service, and the rental of consumer goods were the first to emerge, not because they are the most important for the economy but because these are industries with relatively low regulatory complexity.

No no no!

Food trucks emerged because heavy regulations on restaurants (e.g., fire codes, disability access, landscaping) have cut significantly into profits for restaurants housed in actual buildings.

Uber emerged because the cost of a cab medallion–that is, a license to drive a cab–hit 1.3 MILLION DOLLARS in NYC. It’s a lucrative industry that people were being kept out of.

In contrast, there has been little peer-to-peer business innovation in healthcare, energy, and education–three industries that comprise more than a quarter of the US GDP–where regulatory complexity is relatively high.

Again, no.

There is a ton of competition in healthcare; just look up naturopaths and chiropractors. Sure, most of them are quacks, but they’re definitely out there, competing with regular doctors for patients. (Midwives appear to be actually pretty effective at what they do and significantly cheaper than standard ob-gyns.)

The difficulty with peer-to-peer healthcare isn’t regulation but knowledge and equipment. Most Americans own a car and know how to drive, and therefore can join Uber. Most Americans do not know how to do heart surgery and do not have the proper equipment to do it with. With training I might be able to set a bone, but I don’t own an x-ray machine. And you definitely don’t want me manufacturing my own medications. I’m not even good at making soup.

Education has tons of peer-to-peer innovation. I homeschool my children. Sometimes grandma and grandpa teach the children. Many homeschoolers join consortia that offer group classes, often taught by a knowledgeable parent or hired tutor. Even people who aren’t homeschooling their kids often hire tutors, through organizations like Wyzant or afterschool test-prep centers like Kumon. One of my acquaintances makes her living primarily by Skype-tutoring Koreans in English.

And that’s not even counting private schools.

Yes, if you want to set up a formal “school,” you will encounter a lot of regulation. But if you just want to teach stuff, there’s nothing stopping you except your ability to find students who’ll pay you to learn it.

Now, energy is interesting. Here Auerswald might be correct. I have trouble imagining people setting up their own hydroelectric dams without getting into trouble with the EPA (not to mention everyone downstream).

But what if I set up my own windmill in my backyard? Can I connect it to the electric grid and sell energy to my neighbors on a windy day? A quick search brings up WindExchange, which says, very directly:

Owners of wind turbines interconnected directly to the transmission or distribution grid, or that produce more power than the host consumes, can sell wind power as well as other generation attributes.

So, maybe you can’t set up your own nuclear reactor, and maybe the EPA has a thing about not disturbing fish, but it looks like you can sell wind and solar energy back to the grid.

I find this a rather exciting thought.

Ultimately, while Auerswald does return to and address the need to radically change how we think about education and the education-job-retirement lifepath, he doesn’t return to the increasing white death rate. Why are white death rates increasing faster than other death rates, and will the transition to the “gig economy” further accelerate this trend? Or was the past simply anomalous for having low white death rates? Or could these death rates be driven by something independent of the economy itself?

Now, it’s getting late, so that’s enough for tonight, but what are your thoughts? How do you think this new economy–and educational landscape–will play out?

Book Club: Code Economy, Ch. 10: In which I am Confused

Welcome back to EvX’s Book Club. Today we start the third (and final) part of Auerswald’s The Code Economy: The Human Advantage.

Chapter 10: Complementarity discusses bifurcation, a concept Auerswald mentions frequently throughout the book. He has a graph of the process of bifurcation, whereby the development of new code (i.e., technology) leads to the creation of a new “platform” on the one hand, and new human work on the other. With each bifurcation, we move away from the corner of the graph marked “simplicity” and “autonomy,” and toward the corner marked “complexity” and “interdependence.” It looks remarkably like a graph I made about energy inputs vs. outputs at different complexity levels, based on a memory of a graph I saw in a textbook some years ago.

There are some crucial differences between our two graphs, but I think they are nonetheless related–and possibly trying to express the same thing.

Auerswald argues that as code becomes platform, it doesn’t steal jobs, but becomes the new base upon which people work. The Industrial Revolution eliminated the majority of farm laborers via automation, but simultaneously provided new jobs for them, in factories. Today, the internet is the “platform” where jobs are being created, not in building the internet, but via businesses like Uber that couldn’t exist without the internet.

Auerswald’s graph (not mine) is one of the few places in the book where he comes close to examining the problem of intelligence. It is difficult to see what unintelligent people are going to do in a world that is rapidly becoming more complicated.

On the other hand, people who didn’t have access to all sorts of resources now do, thanks to internet-based platforms. People in the third world, for example, who never bought land-line telephones because their countries couldn’t afford to build the infrastructure to support them, are snapping up mobile phones and smartphones at an extraordinary rate:

And overwhelming majorities in almost every nation surveyed report owning some form of mobile device, even if they are not considered “smartphones.”

And just like Auerswald’s learning curves from the last chapter, technological spread is speeding up. It took the landline telephone 64 years to go from 0% to 40% of the US market. Mobile phones took only 20 years to accomplish the same feat, and smartphones did it in about 10. (source.)
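
Adoption curves like these are commonly modeled as logistic (S-shaped) growth, and the shrinking time-to-40% corresponds to a larger growth rate. A sketch in Python; the rate parameters are reverse-engineered to land near the historical figures, not estimated from real data:

```python
import math

def years_to_reach(share, r, p0=0.001, saturation=1.0):
    """Years for logistic adoption to grow from p0 to `share` of the market.
    Model: p(t) = saturation / (1 + A * exp(-r * t)), A = saturation/p0 - 1.
    """
    A = saturation / p0 - 1
    return math.log(A * share / (saturation - share)) / r

for name, r in [("landline", 0.10), ("mobile", 0.33), ("smartphone", 0.65)]:
    print(f"{name}: ~{years_to_reach(0.40, r):.0f} years to 40%")
# rates chosen so the outputs land near the historical 64, 20, and 10 years
```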

There are now more mobile phones in the developing world than in the first world, and people aren’t just buying these phones to chat. People who can’t afford to open bank accounts now use their smartphones as “mobile wallets”:

According to the GSMA, an industry group for the mobile communications business, there are now 79 mobile money systems globally, mostly in Africa and Asia. Two-thirds of them have been launched since 2009.

To date, the most successful example is M-Pesa, which Vodafone launched in Kenya in 2007. A little over three years later, the service has 13.5 million users, who are expected to send 20 percent of the country’s GDP through the system this year. “We proved at Vodafone that if you get the proposition right, the scale-up is massive,” says Nick Hughes, M-Pesa’s inventor.

But let’s get back to Auerswald. Chapter 10 contains a very interesting description of the development of the Swiss watch industry. Of course, today most people don’t go out of their way to buy watches, since their smartphones have clocks built into them. Have smartphones put the Swiss out of business? Not quite, says Auerswald:

Switzerland… today produces fewer than 5 percent of the timepieces manufactured for export globally. In 2014, Switzerland exported 29 million watches, as compared to China’s 669 million… But what of value? … Swiss watch exports were worth $24.3 billion in 2014, nearly five times as much as all Chinese watches combined.

Aside from the previously mentioned bifurcation of human and machine labor, Auerswald suggests that automation bifurcates products into cheap and expensive ones. He claims that movies, visual art services (i.e., copying and digitization of art vs. fine art), and music have also undergone bifurcation, not extinction, due to new technology.

In each instance, disruptive advances in code followed a consistent and predictable pattern: the creation of a new high-volume, low-price option creates a new market for the low-volume, high-price option. Every time this happens, the new value created through improved code forces a bifurcation of markets, and of work.

Detroit

He then discusses a watch-making startup located in Detroit, which I feel completely and totally misses the point of whatever economic lessons we can draw from Detroit.

Detroit is, at least currently, a lesson in how people fail to deal with increasing complexity, much less bifurcation.

Even that word–bifurcation–contains a problem: what happens to the middle? A huge mass of people at the bottom, making and consuming cheap products, and a small class at the top, making and consuming expensive products–well I will honor the demonstrated preferences of everyone involved for stuff, of whatever price, but what about the middle?

Is this how the middle class dies?

But if the poor become rich enough… does it matter?

Because work is fundamentally algorithmic, it is capable of almost limitless diversification through both combinatorial and incremental change. The algorithms of work become, fairly literally, the DNA of the economy. …

As Geoff Moore puts it, “Digital innovation is reengineering our manufacturing-based product-centric economy to improve quality, reduce cost, expand markets, … It is doing so, however, largely at the expense of traditional middle class jobs. This class of work is bifurcating into elite professions that are highly compensated but outside the skillset of the target population and commoditizing workloads for which the wages fall well below the target level.”

It is easy to take the long view and say, “Hey, the agricultural revolution didn’t result in massive unemployment among hunter-gatherers; the bronze and iron ages didn’t result in unemployed flint-knappers starving in the streets, so we’ll probably survive the singularity, too,” and equally easy to take the short view and say, “screw the singularity, I need a job that pays the bills now.”

Auerswald then discusses the possibilities for using big data and mobile/wearable computers to bring down healthcare costs. I am also in the middle of a Big Data reading binge, and my general impression of health care is that there is a ton of data out there (and more being collected every day), but it is unwieldy and disorganized, doctors are too busy to use most of it, and patients don’t have access to it. If someone can amass, organize, and sort that data in useful ways, some very useful discoveries could be made.

Then we get to the graph that I didn’t understand, “Trends in Nonroutine Task Input, 1960 to 1998,” which is a bad sign for my future employment options in this new economy.

My main question is what is meant by “nonroutine manual” tasks, and why, since these were the occupations with the biggest effect shown on the graph, they aren’t mentioned in the abstract:

We contend that computer capital (1) substitutes for a limited and well-defined set of human activities, those involving routine (repetitive) cognitive and manual tasks; and (2) complements activities involving non-routine problem solving and interactive tasks. …Computerization is associated with declining relative industry demand for routine manual and cognitive tasks and increased relative demand for non-routine cognitive tasks.

Yes, but what about the non-routine manual? What is that, and why did it disappear first? And does this graph account for increased offshoring of manufacturing jobs to China?

If you ask me, it looks like there are three different events recorded in the graph, not just one. First, from 1960 onward, “non-routine manual” jobs plummet. Second, from 1960 through 1970, “routine cognitive” and “routine manual” jobs increase faster than “non-routine analytic” and “non-routine interactive.” Third, from 1980 onward, the routine jobs head downward while the analytic and interactive jobs become more common.

*Downloads the PDF and begins to read* Here’s the explanation of non-routine manual:

Both optical recognition of objects in a visual field and bipedal locomotion across an uneven surface appear to require enormously sophisticated algorithms, the one in optics and the other in mechanics, which are currently poorly understood by cognitive science (Pinker, 1997). These same problems explain the earlier mentioned inability of computers to perform the tasks of long haul truckers.

In this paper we refer to such tasks requiring visual and manual skills as ‘non-routine manual activities.’

This does not resolve the question.

Discussion from the paper:

Trends in routine task input, both cognitive and manual, also follow a striking pattern. During the 1960s, both forms of input increased due to a combination of between- and within-industry shifts. In the 1970s, however, within-industry input of both tasks declined, with the rate of decline accelerating.

As distinct from the other four task measures, we observe steady within- and between-industry shifts against non-routine manual tasks for the entire four decades of our sample. Since our conceptual framework indicates that non-routine manual tasks are largely orthogonal to computerization, we view this pattern as neither supportive nor at odds with our model.

Now, it’s 4 am and the world is swimming a bit, but I think “we aren’t predicting any particular effect on non-routine manual tasks” should have been stated up front in the thesis portion. Sticking it in here feels like ad hoc explaining away of a discrepancy: “Well, all of the other non-routine tasks went up, but this one didn’t, so, well, it doesn’t count because they’re hard to computerize.”

Anyway, the paper is 62 pages long, including the tables and charts, and I’m not reading it all or second-guessing their math at this hour, but I feel like there is something circular in all of this–“We already know that jobs involving routine labor like manufacturing are down, so we made a model saying they decreased as a percent of jobs because of computers and automation, looked through jobs data, and lo and behold, found that they had decreased. Confusingly, though, we also found that non-routine manual jobs decreased during this time period, even though they don’t lend themselves to automation and computerization.”

I also searched in the document and could find no instance of the words “offshor-” “China” “export” or “outsource.”

Also, the graph Auerswald uses and the corresponding graph in the paper have some significant differences, especially the “routine cognitive” line. Maybe the authors updated their graph with more data, or Auerswald was trying to make the graph clearer. I don’t know.

Whatever is up with this paper, I think we may provisionally accept its data–fewer factory workers, more lawyers–without necessarily accepting its model.

The day after I wrote this, I happened to be reading Davidowitz’s Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are, which has a discussion of the best places to raise children.

Talking about Chetty’s data, Davidowitz writes:

The question asked: what is the chance that a person with parents in the bottom 20 percent of the income distribution reaches the top 20 percent of the income distribution? …

So what is it about parts of the United States where there is high income mobility? What makes some places better at leveling the playing field, of allowing a poor kid to have a pretty good life? Areas that spend more on education provide a better chance to poor kids. Places with more religious people and lower crime do better. Places with more black people do worse. Interestingly, this has an effect not just on the black kids but on the white kids living there as well.

Here is Chetty’s map of upward mobility (or the lack thereof) by county. Given how closely it matches a map of “African Americans” + “Native Americans” I have my reservations about the value of Chetty’s research on the bottom end (is anyone really shocked to discover that black kids enjoy little upward mobility?) but it still has some comparative value.

Davidowitz then discusses Chetty’s analysis of where people live the longest:

Interestingly, for the wealthiest Americans, life expectancy is hardly affected by where you live. …

For the poorest Americans, life expectancy varies tremendously…. living in the right place can add five years to a poor person’s life expectancy. …

…religion, environment, and health insurance–do not correlate with longer life spans for the poor. The variable that does matter, according to Chetty and the others who worked on this study? How many rich people live in a city. More rich people in a city means the poor there live longer. Poor people in New York City, for example, live longer than poor people in Detroit.

Davidowitz suggests that maybe this happens because the poor learn better habits from the rich. I suspect the answer is simpler–here are a few possibilities:

1. The rich are effectively stopping the poor from doing self-destructive things, whether positively, e.g., funding cultural institutions that poor people go to rather than turning to drugs or crime out of boredom, or negatively, e.g., funding police forces that discourage life-shortening crime.

2. The rich fund/support projects that improve general health, like cleaner water systems or better hospitals.

3. The effect is basically just a measurement error that doesn’t account for rich people driving up land prices. The “poor” of New York would be wealthier if they had Detroit rents.

(In general, I think Davidowitz is stronger when looking for correlations in the data than when suggesting explanations for it.)

Now contrast this with Davidowitz’s own study on where top achievers grow up:

I was curious where the most successful Americans come from, so one day I decided to download Wikipedia. …

[After some narrowing for practical reasons] Roughly 2,058 American-born baby boomers were deemed notable enough to warrant a Wikipedia entry. About 30 percent made it through achievements in art or entertainment, 29 percent through sports, 9 percent via politics, and 3 percent in academia or science.

And this is why we are doomed.

The first striking fact I noticed in the data was the enormous geographic variation in the likelihood of becoming a big success …

Roughly one in 1,209 baby boomers born in California reached Wikipedia. Only one in 4,496 baby boomers born in West Virginia did. … Roughly one in 748 baby boomers born in Suffolk County, MA, where Boston is located, made it to Wikipedia. In some counties, the success rate was twenty times lower. …

I closely examined the top counties. It turns out that nearly all of them fit into one of two categories.

First, and this surprised me, many of these counties contained a sizable college town. …

I don’t know why that would surprise anyone. But this was interesting:

Of fewer than 13,000 boomers born in Macon County, Alabama, fifteen made it to Wikipedia–or one in 852. Every single one of them is black. Fourteen of them were from the town of Tuskegee, home of Tuskegee University, a historically black college founded by Booker T. Washington. The list included judges, writers, and scientists. In fact, a black child born in Tuskegee had the same probability of becoming a notable in a field outside of sports as a white child born in some of the highest-scoring, majority-white college towns.

The other factor that correlates with the production of notables?

A big city.

Being born in San Francisco County, Los Angeles County, or New York City all offered among the highest probabilities of making it to Wikipedia. …

Suburban counties, unless they contained major college towns, performed far worse than their urban counterparts.

A third factor that correlates with success is the proportion of immigrants in a county, though I am skeptical of this finding because I’ve never gotten the impression that the southern border of Texas produces a lot of famous people.

Migrant farm laborers aside, though, America’s immigrant population tends to be pretty well selected overall and thus produces lots of high-achievers. (Steve Jobs, for example, was the son of a Syrian immigrant; Thomas Edison was the son of a Canadian refugee.)

The variable that didn’t predict notability:

One I found more than a little surprising was how much money a state spends on education. In states with similar percentages of its residents living in urban areas, education spending did not correlate with rates of producing notable writers, artists, or business leaders.

Of course, this is probably because 1. districts increase spending when students do poorly in school, and 2. rich people in urban areas send their kids to private schools.

BUT:

It is interesting to compare my Wikipedia study to one of Chetty’s team’s studies discussed earlier. Recall that Chetty’s team was trying to figure out what areas are good at allowing people to reach the upper middle class. My study was trying to figure out what areas are good at allowing people to reach fame. The results are strikingly different.

Spending a lot on education helps kids reach the upper middle class. It does little to help them become a notable writer, artist, or business leader. Many of these huge successes hated school. Some dropped out.

Some, like Mark Zuckerberg, went to private school.

New York City, Chetty’s team found, is not a particularly good place to raise a child if you want to ensure he reaches the upper middle class. It is a great place, my study found, if you want to give him a chance at fame.

A couple of methodological notes:

Note that Chetty’s data not only looked at where people were born, but also at mobility–poor people who moved from the Deep South to the Midwest were also more likely to become upper middle class, and poor people who moved from the Midwest to NYC were also more likely to stay poor.

Davidowitz’s data only looks at where people were born; he does not answer whether moving to NYC makes you more likely to become famous. He also doesn’t discuss who is becoming notable–are cities engines that make the children of already successful people even more successful, or are they places where even the poor have a shot at becoming famous?

I reject Davidowitz’s conclusions (which impute causation where there is only correlation) and substitute my own:

Cities are acceleration platforms for code. Code creates bifurcation. Bifurcation creates winners and losers while obliterating the middle.

This is not necessarily a problem if your alternatives are worse–if your choice is between poverty in NYC or poverty in Detroit, you may be better off in NYC. If your choice is between poverty in Mexico and poverty in California, you may choose California.

But if your choice is between a good chance of being middle class in Salt Lake City versus a high chance of being poor and an extremely small chance of being rich in NYC, you are probably a lot better off packing your bags and heading to Utah.

But if cities are important drivers of innovation (especially in science, to which we owe thanks for things like electricity and refrigerated food shipping,) then Auerswald has already provided us with a potential solution to their runaway effects on the poor: Henry George’s land value tax. As George recounts, one day, while overlooking San Francisco:

I asked a passing teamster, for want of something better to say, what land was worth there. He pointed to some cows grazing so far off that they looked like mice, and said, “I don’t know exactly, but there is a man over there who will sell some land for a thousand dollars an acre.” Like a flash it came over me that there was the reason of advancing poverty with advancing wealth. With the growth of population, land grows in value, and the men who work it must pay more for the privilege.[28]

Alternatively, higher taxes on fortunes like Zuckerberg’s and Bezos’s might accomplish the same thing.

A Modest Educational Proposal


Fellow humans, we have a problem. (And another problem.)

At least, this looks like a problem to me, especially when I’m trying to make conversation at the local moms group.

There are many potential reasons the data looks like this (including inaccuracy, though my lived experience says it is accurate.) Our culture encourages people to limit their fertility, and smart women are especially so encouraged. Smart people are also better at long-term planning and doing things like “reading the instructions on the birth control.”

But it seems likely that there is another factor, an arrow of causation pointing in the other direction: smart people tend to stay in school for longer, and people dislike having children while they are still in school. While you are in school, you are in some sense still a child, and we have a notion that children shouldn’t beget children.

Isaac Newton. Never married. Probably a virgin.

People who drop out of school and start having children at 16 tend not to be very smart and also tend to have plenty of children during their child-creating years. People who pursue post-docs into their thirties tend to be very smart–and many of them are virgins.

Now, I don’t know about you, but I kind of like having smart people around, especially the kinds of people who invent refrigerators and make supply chains work so I can enjoy eating food, even though I live in a city, far from any farms. I don’t want to live in a world where IQ is crashing and we can no longer maintain complex technological systems.

We need to completely re-think this system where the smarter you are, the longer you are expected to stay in school, accruing debt and not having children.

Proposal one: Accelerated college for bright students. Let any student who can do college-level work begin that work for college credit, even if they are still in high (or middle) school. There are plenty of bright students out there who could be completing their degrees by 18.

The entire framework of schooling probably ought to be sped up in a variety of ways, especially for bright students. The current framework often reflects the order in which various discoveries were made, rather than the age at which students are capable of learning the material. For example, negative numbers are apparently not introduced in the math curriculum until 6th grade, even though, in my experience, even kindergarteners are perfectly capable of understanding the concept of “debt.” If I promise to give you one apple tomorrow, then I have “negative one apple.” There is no need to hide the concept of negatives for 6 years.

Proposal two: More apprenticeship.

In addition to being costly and time-consuming, a college degree doesn’t even guarantee that your chosen field will still be hiring when you graduate. (I know people with STEM degrees who graduated right as the dot-com bubble burst. Ouch.) We essentially want our educational system to turn out people who are highly skilled at highly specialized trades, and capable of turning around and becoming highly skilled at another highly specialized trade on a dime if that doesn’t work out. This leads to chemists returning to university for law degrees and physicists going back for medical degrees. We want students to have both “broad educations” so they can get hired anywhere, and “deep educations” so they’ll actually be good at their jobs.

Imagine, instead, a system where high school students are allowed to take a two-year course in preparation for a particular field, at the end of which high performers are accepted into an apprenticeship program where they continue learning on the job. At worst, these students would have a degree, income, and job experience by the age of 20, even if they decided they now wanted to switch professions or pursue an independent education.

Proposal three: Make childbearing normal for adult students.

There’s no reason college students can’t get married and have children (aside from, obviously, their lack of jobs and income.) College is not more time consuming or physically taxing than regular jobs, and college campuses tend to be pretty pleasant places. Studying while pregnant isn’t any more difficult than working while pregnant.

Grad students, in particular, are old and mature enough to get married and start families, and society should encourage them to do so.

Proposal four: stop denigrating child-rearing, especially for intelligent women.

Children are a lot of work, but they’re also fun. I love being with my kids. They are my family and an endless source of happiness.

What people want and value, they will generally strive to obtain.

 

These are just some ideas. What are yours?

Book Club: The Code Economy: The DNA of Business

“DNA builds products with a purpose. So do people.” –Auerswald, The Code Economy

McDonald’s is the world’s largest restaurant chain by revenue[7], serving over 69 million customers daily in over 100 countries[8] across approximately 36,900 outlets as of 2016.[9] … According to a BBC report published in 2012, McDonald’s is the world’s second-largest private employer (behind Walmart), with 1.9 million employees, 1.5 million of whom work for franchises. …

There are currently a total of 5,669 company-owned locations and 31,230 franchised locations… Notably, McDonald’s has increased shareholder dividends for 25 consecutive years,[18] making it one of the S&P 500 Dividend Aristocrats.[19][20]

According to Fast Food Nation by Eric Schlosser (2001), nearly one in eight workers in the U.S. have at some time been employed by McDonald’s. … Fast Food Nation also states that McDonald’s is the largest private operator of playgrounds in the U.S., as well as the single largest purchaser of beef, pork, potatoes, and apples.  (Wikipedia)

How did a restaurant whose only decent products are french fries and milkshakes come to dominate the global corporate landscape?

IKEA is not only the world’s largest furniture store, but also among the globe’s top 10 retailers of anything and the 25th most beloved corporation. (Disney ranks number one.) Even I feel a strange, heartwarming emotion at the thought of IKEA, which somehow comes across as a sweet and kind multi-national behemoth.

In The Code Economy, Auerswald suggests that the secret to McDonald’s success isn’t (just) the french fries and milkshake machines:

Kroc opened his first McDonald’s restaurant in 1955 in Des Plaines, Illinois. Within five years he had opened two hundred new franchises across the country. [!!!] He pushed his operators obsessively to adhere to a system that reinforced the company motto: “Quality, service, cleanliness, and value.”


Quoting Kroc’s 1987 autobiography:

“It’s all interrelated–our development of the restaurant, the training, the marketing advice, the product development, the research that has gone into each element of the equipment package. Together with our national advertising and continuing supervisory assistance, it forms an invaluable support system. Individual operators pay 11.5 percent of their gross to the corporation for all of this…”

The process of operating a McDonald’s franchise was engineered to be as cognitively undemanding as possible. …

Kroc created a program that could be broken into subroutines…. Acting like the DNA of the organization, the manual allowed the Speedee Service System to function in a variety of environments without losing essential structure or function.

McDonald’s is big because it figured out how to reproduce.

source: Statista

I’m not sure why IKEA is so big (I don’t think it’s a franchise like McDonald’s,) but based on the information posted on their walls, it’s because of their approach to furniture design. First, think of a problem, eg, People Need Tables. Second, determine a price–IKEA makes some very cheap items and some pricier items, to suit different customers’ needs. Third, use Standard IKEA Wooden Pieces to design a nice-looking table. Fourth, draw the assembly instructions, so that anyone, anywhere, can assemble the furniture themselves–no translation needed.

IKEA furniture is kind of like Legos, in that much of it is made of very similar pieces of wood assembled in different ways. The wooden boards in my table aren’t that different in size and shape from the ones in my dresser nor the ones in my bookshelf, though the items themselves have pretty different dimensions. So on the production side, IKEA lowers costs by producing not actual furniture, but collections of boards. Boards are easy to make–sawmills produce tons of them.

Furniture is heavy, but mostly empty space. By contrast, piles of boards stack very neatly and compactly, saving space both in shipping and when buyers are loading the boxes into their cars. (I am certain that IKEA accounts for common car dimensions in designing and packing their furniture.)

And the assembly instructions allow the buyer to ultimately construct the furniture.

In other words, IKEA has hit upon a successful code that allows them to produce many different designs from a few basic boards and ship them efficiently–keeping costs low and allowing them to thrive.

From Anatomy of an IKEA product:

The company is also looking for ways to maximize warehouse efficiency.

“We have (only) two pallet sizes,” Marston said, referring to the wooden platforms on which goods are placed. “Our warehouses are dimensioned and designed to hold these two pallet sizes. It’s all about efficiencies because that helps keep the price of innovation down.”

In Europe, some IKEA warehouses utilize robots to “pick the goods,” a term of art for grabbing products off very high shelves.

These factories, Marston said, are dark, since no lighting is needed for the robots, and run 24 hours a day, picking and moving goods around.

“You (can) stand on a catwalk,” she said, “and you look out at this huge warehouse with 12 pallets (stacked on top of each other) and this robot’s running back and forth running on electronic eyebeams.”

IKEA’s code and McDonald’s code are very different, but both let the companies produce the core items they sell quickly, cheaply, and efficiently.

Chapter 8 of The Code Economy, “Evolution,” discusses the rise of Toll House cookies, McDonald’s, the difference between natural and artificial objects, and the development of evolutionary theory from Darwin through Watson and Crick to Kauffman and Levin’s 1987 paper, “Towards a General Theory of Adaptive Walks on Rugged Landscapes.” (With a brief stop at Erwin Schrödinger along the way.)

The difficulty with evolution is that systems are complicated; successful mutations or even just combinations of existing genes must work synergistically with all of the other genes and systems already operating in the body. A mutation that increases IQ by tweaking neurons in a particular way might have the side effect of causing neurons outside the brain to malfunction horribly; a mutation that protects against sickle-cell anemia when you have one copy of it might just kill you itself if you have two copies.

Auerswald quotes Kauffman and Levin:

“Natural selection does not work as an engineer works… It works like a tinkerer–a tinkerer who does not know exactly what he is going to produce but uses… everything at his disposal to produce some kind of workable object.” This process is progressive, moving from simpler to more complex forms: “Evolution does not produce novelties from scratch. It works on what already exists, either transforming a system to give it new functions or combining several systems to produce a more elaborate one [as] during the passage from unicellular to multicellular forms.”

Further:

The Kauffman and Levin model was as simple as it was powerful. Imagine a genetic code of length N, where each gene might occupy one of two possible “states”–for example, “0” and “1” in a binary computer. The difficulty of the evolutionary problem was tunable with the parameter K, which represented the average number of interactions among genes. The NK model, as it came to be called, was able to reproduce a number of measurable features of evolution in biological systems. Evolution could be represented as a genetic walk on a fitness landscape, in which increasing complexity was now a central parameter.
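
The NK model is simple enough to sketch in a few lines of code. Here is a minimal Python sketch following the formulation just quoted–a random fitness contribution per gene, each depending on K other genes. The particular parameters (N = 12, K = 3) and the greedy one-flip walk are my own illustrative choices, not anything from the book:

```python
import itertools
import random

def make_nk_landscape(N, K, seed=0):
    """A random NK landscape: each of N binary genes contributes a fitness
    value that depends on its own state plus the states of K other genes."""
    rng = random.Random(seed)
    neighbors = [rng.sample([j for j in range(N) if j != i], K)
                 for i in range(N)]
    # Each gene's contribution is a random lookup over its (K+1)-gene context.
    tables = [{bits: rng.random()
               for bits in itertools.product((0, 1), repeat=K + 1)}
              for _ in range(N)]

    def fitness(genome):
        return sum(tables[i][(genome[i],) + tuple(genome[j] for j in neighbors[i])]
                   for i in range(N)) / N

    return fitness

def adaptive_walk(fitness, genome):
    """Greedy 'genetic walk': keep any single-gene flip that raises fitness;
    stop at a local peak, where no single flip helps."""
    genome = list(genome)
    current = fitness(tuple(genome))
    improved = True
    while improved:
        improved = False
        for i in range(len(genome)):
            genome[i] ^= 1                       # try flipping gene i
            trial = fitness(tuple(genome))
            if trial > current:
                current, improved = trial, True  # keep the improvement
            else:
                genome[i] ^= 1                   # revert the flip
    return tuple(genome), current

fit = make_nk_landscape(N=12, K=3)
start = tuple(random.Random(1).choice((0, 1)) for _ in range(12))
print(adaptive_walk(fit, start))
```

Raise K and the landscape grows more rugged: more interactions among genes means more local peaks for the walk to get stuck on–which is exactly the tunable difficulty the quote describes.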

You may remember my previous post on Local Optima, Diversity, and Patchwork:

Local optima–or optimums, if you prefer–are an illusion created by distance. A man standing on the hilltop at (approximately) X=2 may see land sloping downward all around himself and think that he is at the highest point on the graph. But hand him a telescope, and he discovers that the fellow standing on the hilltop at X=4 is even higher than he is. And hand the fellow at X=4 a telescope, and he’ll discover that X=6 is even higher.

A global optimum is the best possible way of doing something; a local optimum can look like a global optimum because all of the other, similar ways of doing the same thing are worse.
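
For the concrete-minded, here is a minimal sketch of that trap in code. The three-hill function is my own invention, chosen only to match the X = 2, 4, 6 picture above:

```python
import math

def height(x):
    # Three hills near x = 2, 4, and 6, each taller than the last.
    return (1 * math.exp(-5 * (x - 2) ** 2) +
            2 * math.exp(-5 * (x - 4) ** 2) +
            3 * math.exp(-5 * (x - 6) ** 2))

def climb(x, step=0.05):
    # Walk uphill one small step at a time; stop when both neighbors
    # are lower. This is the man on the hilltop without a telescope.
    while not (height(x - step) <= height(x) >= height(x + step)):
        x += step if height(x + step) > height(x - step) else -step
    return x

print(round(climb(1.5), 2))  # 2.0 -- a local optimum; the climber is stuck
print(round(climb(5.5), 2))  # 6.0 -- the global optimum, found only because
                             #        the climber happened to start nearby
```

The only thing that gets the first climber off his short hill is information about the landscape beyond it–the telescope.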

Some notable examples of cultures that were stuck at local optima but were able, with exposure, to jump suddenly to a higher optimum: The “opening of Japan” in the late 1800s resulted in breakneck industrialization and rising standards of living; the Cherokee invented their own alphabet (technically a syllabary) after glimpsing the Roman one, and achieved mass literacy within decades; European mathematics and engineering really took off after the introduction of Hindu-Arabic numerals and the base-ten system.

If we consider each culture its own “landscape” in which people (and corporations) are finding locally optimal solutions to problems, then it becomes immediately obvious that we need both a large number of distinct cultures working out their own solutions to problems and occasional communication and feedback between those cultures so results can transfer. If there is only one, global, culture, then we only get one set of solutions–and they will probably be sub-optimal. If we have many cultures but they don’t interact, we’ll get tons of solutions, and many of them will be sub-optimal. But many cultures developing their own solutions and periodically interacting can develop many solutions and discard sub-optimal ones for better ones.

On a related note, Gore Burnelli writes: How Nassim Taleb changed my mind about religion:

Life constantly makes us take decisions under conditions of uncertainty. We can’t simply compute every possible outcome, and decide with perfect accuracy what the path forward is. We have to use heuristics. Religion is seen as a record of heuristics that have worked in the past. …

But while every generation faces new circumstances, there are also some common problems that every living being is faced with: survival and reproduction, and these are the most important problems because everything else depends on them. Mess with these, and everything else becomes irrelevant.

This makes religion an evolutionary record of solutions which persisted long enough, by helping those who held them to persist.

This is not saying “All religions are perfect and good and we should follow them,” but it is suggesting, “Traditional religions (and cultures) have figured out ways to solve common problems and we should listen to their ideas.”


Back in The Code Economy, Auerswald asks:

Might the same model, derived from evolutionary biology, explain the evolution of technology?

… technology may also be nothing else but the capacity for invariant reproduction. However, in order for more complex forms of technology to be viable over time, technology also must possess a capacity for learning and adaptation.

Evolutionary theory as applied to the advance of code is the focus of the next chapter. Kauffman and Levin’s NK model ends up providing a framework for studying the creation and evolution of code. Learning curves act as the link between biology and economics.

Will the machines become sentient? Or McDonald’s? And which should we worry about?

Book Club: The Industrial Revolution and its Discontents, Code Economy, ch. 5

1. The Industrial Revolution and its consequences have been a disaster for the human race. They have greatly increased the life-expectancy of those of us who live in “advanced” countries, but they have destabilized society, have made life unfulfilling, have subjected human beings to indignities, have led to widespread psychological suffering (in the Third World to physical suffering as well) and have inflicted severe damage on the natural world. The continued development of technology will worsen the situation. It will certainly subject human beings to greater indignities and inflict greater damage on the natural world, it will probably lead to greater social disruption and psychological suffering, and it may lead to increased physical suffering even in “advanced” countries. –Kaczynski, Industrial Society and Its Future

The quest to find and keep a “job for life”–stable, predictable work that pays enough to live on, is reachable by available transportation, and lends a sense of meaning to their daily lives–runs through every interview transcript, from those who are unemployed to those who have “made it” to steady jobs like firefighting or nursing. Traditional blue-collar work–whether as a factory worker or a police officer–has become increasingly scarce and competitive, destroyed by a technologically advanced and global capitalism that prioritizes labor market “flexibility”… Consequently, the post-industrial generation is forced to continuously grapple with flux and contingency, bending and adapting to the demands of the labor market until they feel that they are about to break. –Silva, Coming up Short: Working-Class Adulthood in an Age of Uncertainty

The historical record confirms that the realities of the ongoing processes of mechanization and industrialization, as noted early on by Lord Byron, were very different from the picture adherents to the wage fund theory held in their heads. While the long-term impact the Industrial Revolution had on the health and well-being of the English population was strongly positive, the first half of the nineteenth century was indeed a time of exceptional hardship for English workers. In a study covering the years 1770-1815, Stephen Nicholas and Richard Steckel report “falling heights of urban- and rural-born males after 1780 and a delayed growth spurt for 13- to 23-year-old boys,” as well as a fall in the English workers’ height relative to that of Irish convicts. By the 1830s, the life expectancy of anyone born in Liverpool and Manchester was less than 30 years–as low as had been experienced in England since the Black Death of 1348. –Auerswald, The Code Economy

On the other hand:

Chapter 5 of The Code Economy, Substitution, explores the development of economic theories about the effects of industrialization and general attempts at improving the lives of the working poor.

… John Barton, a Quaker, published a pamphlet in 1817 titled, Observations on the Circumstances Which Influence the Condition of the Laboring Classes of Society. … Barton began by targeting the Malthusian assumption that population grows in response to increasing wages. … He began by noting that there was no a priori reason to believe that labor and capital were perfect complements, as classical economists implicitly assumed. The more sensible assumption was that, as wages increased, manufacturers and farmers alike would tend to substitute animals or machines for human labor. Rather than increasing the birth rate, the higher wages brought on by the introduction of new machinery would increase intergenerational differences in income and thus delay child-bearing. Contrary to the Malthusian line of argument, this is exactly what happened.

There’s an end note that expands on this (you do read the end notes, right?) Quoting Barton, 1817:

A rise of wages then does not always increase population… For every rise of wages tends to decrease the effectual demand for labor… Suppose that by a general agreement among farmers the rate of agricultural wage were raised from 12 shillings to 24 shillings per week–I cannot imagine any circumstance calculated more effectually to discourage marriage. For it would immediately become a most important object to cultivate with as few hands as possible; wherever the use of machinery, or employment of horses could be substituted for manual labor, it would be done; and a considerable portion of existing laborers would be out of work.

This is the “raising the minimum wage will put people out of work” theory. Barton also points out that when people do manage to get these higher-paid jobs, they will tend to be older, more experienced laborers rather than young folks looking to marry and start a family.

A quick perusal of minimum wage vs. unemployment rate graphs reveals some that are good evidence against minimum wages, and some that are good evidence in favor of them. Here’s a link to a study that found no effects of minimum wage differences on employment. The American minimum wage data is confounded by things like “DC is a shithole.” DC has the highest minimum wage in the country and the highest unemployment rate, but Hawaii also has a very high minimum wage and the lowest unemployment rate. In general, local minimum wages probably reflect the local cost of living, and the cost of living reflects local wages. If we adjust for inflation, minimum wage in the US peaked around 1968 and was generally high throughout the 60s and 70s, but has fallen since then. Based on conversations with my parents, I gather the 60s and 70s were a good time to be a worker, when unskilled labor could pretty easily get a job and support a family; unemployment rates do not seem to have fallen markedly since then, despite lower real wages. A quick glance at a map of minimum wages by country reveals that countries with higher minimum wages tend to be nicer countries that people actually want to live in, but the relationship is not absolute.

We might say that this contradicts Barton, but why have American wages stagnated or gone down since the 60s?

1. Automation

2. Emergence of other economic competitors as Europe and Japan recovered from WWII

3. Related: Outsourcing to cheaper workers in China

4. Labor market growth due to entry of women, immigrants, and Boomers generally

Except for 2, that sounds a lot like what Barton said would happen. Wages go up => people move where the good jobs are => labor market expands => wages go down. If labor cannot move, then capitalists can either move the businesses to the labor or invest in machines to replace the labor.

On the other hand, the standard of living is clearly higher today than it was in 1900, even if wages, like molecules diffusing through the air, tend to even out over time. Why?

First, obviously, we learned to extract more energy from sources like oil, coal, and nuclei. A loom hooked up (via the electrical grid) to an electric turbine can make a lot more cloth per hour than a mere human working with shuttles and thread.

Second, we have gotten better at using the energy we extract–Auerswald would call this “code.”

Standards of living may thus have more to do with available resources (including energy) and our ability to use those resources (both the “code” we have developed and our own inherent ability to interact with and use that code,) than with the head-scratching entropy of minimum wages.

Auerswald discusses the evolution of David Ricardo’s economic ideas:

By incorporating the potential for substitution between capital and labor, Ricardo led the field of economics in rejecting the wage fund theory, along with its Dickensian implications for policy. He accepted the notion that the introduction of new machinery would result in the displacement of workers. The upshot was that the workers were still assumed to be doomed, but the reason was now substitution of machines for labor, not scarcity of a Malthusian variety.

Enter Henry George, with a radically different perspective:

“Like a flash it came over me that there was the reason of advancing poverty with advancing wealth. With the growth of population, land grows in value, and the men who work it must pay more for the privilege.” …

George asserted that increasing population density (not, as Malthus claimed, population decline) was the source of increased prosperity in human societies: “Wealth is greatest where population is densest… the production of wealth to a given amount of labor increases as population increases.” The frequent interactions among people in densely populated cities accelerate the emergence and evolution of code. However, while population growth and increased density naturally bring increased prosperity, they also, just as naturally, bring increasing inequality and poverty. Why? Because the fruits of labor are inevitably gathered by the owners of land.

In other words, increasing wages => increasing rents and the workers are right back where they started while the landlords are sitting pretty.

In sharp contrast with Karl Marx, … George stated that “the antagonism of interest is not between labor and capital… but is in reality between labor and capital on the one side and the land ownership on the other.” The implication of his analysis was as simple as it was powerful: to avoid concentrating wealth in the hands of the few, it was the government’s responsibility to eliminate all taxes on capital and laborers, the productive elements of the economy, and to replace those taxes with a single tax on land.

Note: not a flat tax on land, but a tax relative to the land’s sale value.
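
To make the distinction concrete, here is a minimal sketch with entirely invented numbers: the tax bill scales with the unimproved value of the land, and buildings are deliberately excluded.

```python
# Hypothetical parcels: (description, unimproved land value, improvements).
parcels = [
    ("downtown corner lot", 5_000_000, 20_000_000),
    ("suburban house lot",    150_000,    350_000),
    ("rural acre",             10_000,          0),
]

LVT_RATE = 0.05  # illustrative 5% annual rate on unimproved land value

for name, land, improvements in parcels:
    tax = LVT_RATE * land  # improvements are deliberately untaxed
    print(f"{name}: land ${land:,} -> tax ${tax:,.0f} "
          f"(${improvements:,} of buildings, untaxed)")
```

Since improvements are untaxed, building on a valuable lot doesn’t raise the owner’s bill; if anything, the tax pressures owners of valuable land to put it to productive use.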

I was glad to see Henry George in the book because I enjoy George’s theories and they are under-discussed, especially relative to Marxism. You will find massive online communities of Marxists despite the absolute evidence that Marxism is a death machine, but relatively few enthusiastic Georgists. One of the things I rather appreciate about Georgism is its simplicity; the complication of the tax code is its own, additional burden on capitalists and workers alike. Almost any simplified tax code, no matter how “unfair,” would probably improve matters a great deal.

But there’s more, because this is a dense chapter. Auerswald notes that the increasing complexity of code (ie, productivity) has led to steadily increasing standards of living over the past two centuries, at least after the Industrial Revolution’s initial cataclysm.

Quoting economist Paul Douglas, some years later:

“The increased use of mechanical appliances in offices has tended to lower the skill required. An old-fashioned bookkeeper, for instance, had to write a good hand, he had to be able to multiply and divide with absolute accuracy. Today his place is taken by a girl who operates a book-keeping machine, and it has taken her a few weeks at most to become a skilled bookkeeper.” In other words, the introduction of machinery displaced skilled workers for the very same reason it enhanced human capabilities: it allowed a worker with relatively rudimentary training to perform tasks that previously required a skilled worker.

…”Another way of looking at it, is this: Where formerly the skill used in bookkeeping was exercised by the bookkeeper, today that skill is exercised by the factory employees who utilize it to manufacture a machine which can do the job of keeping books, when operated by someone of skill far below that of the former bookkeeper. And because of this transfer of skill from the office to the factory, the rewards of skill are likewise transferred to the wage-earner at the plant.”

This is a vitally important point… The essence of this insight is that introducing more powerful machines into the workplace does more than simply encode into the machine the skills or capabilities that previously resided only in humans; it also shifts the burden of skill from one domain of work to another. … A comparable shift in recent decades has been from the skill of manufacturing computing machines (think IBM or Dell in their heydays) to that of creating improved instructions for computing machines; the result has been a relative growth in programmers’ wages. The underlying process is the same. Improvements in technology will predictably reduce demand for the skills held by some workers, but they also will enhance the capabilities of other workers and shift the requirements of skill from one domain of work to the other.

The problem with this is that the average person puts in 15-20 years of schooling (plus $$$) in order to become skilled at a job, only to suddenly have that job disappear due to accelerating technological change, and then some asshole comes along and tells them they should just “learn to code”–that is, spend another two to four years unemployed and paying for the privilege of learning another job–and doesn’t see how fucking dispiriting this is to the already struggling.

The challenge for society is recognizing that even as standards of living are generally rising, some people may absolutely be struggling with an economic system that offers much less certainty and stability than our ancestors enjoyed.

A final word from Auerswald:

… work divides or “bifurcates” as code advances in a predictable and repeatable way. The bifurcation of work is a critical mechanism by which the advance of code yields improvements in human well-being at the same time as it increases human reliance on code.

Book Club: The Code [Robot] Economy (pt. 2)

Welcome to EvX’s book club. Today we’re discussing Philip Auerswald’s The Code Economy, Introduction.

I’ve been discussing the robot economy for years (though not necessarily via the blog.) What happens when robots take over most of the productive jobs? Most humans were once involved in directly producing the necessities of human life–food, clothing, and shelter, but mostly food. Today, machines have eliminated most food and garment production jobs. One tractor easily plows many more acres in a day than a horse or mule team did in the 1800s, allowing one man to produce as much food as dozens (or hundreds) once did.

What happened to those ex-farmers? Most of us are employed in new professions that didn’t exist (eg, computer specialist) or barely existed (health care), but there are always those who can’t find employment–and unemployment isn’t evenly distributed.

Black unemployment rate

Since 1948, the overall unemployment rate has rarely exceeded 7.5%; the rate for whites has been slightly lower. By contrast, the black unemployment rate has rarely dipped below 10% (since 1972, the best data I have.) The black unemployment rate has only gone below 7.5% three times–for one month in 1999, one month in 2000, and since mid-2017. 6.6% in April, 2018 is the all-time low for black unemployment. (The white record, 3.0%, was set in the ’60s.)

(As Auerswald points out, “unemployment” was a virtually unknown concept in the Medieval economy, where social station automatically dictated most people’s jobs for life.)

Now I know the books are cooked and “unemployment” figures are kept artificially low by shunting many of the unemployed into the ranks of the officially “disabled,” who aren’t counted in the statistics, but no matter how you count the numbers, blacks struggle to find jobs at higher rates than whites–a problem they didn’t face in the pre-industrial, agricultural economy (though that economy caused suffering in its own way.)

A quick glance at measures of black and white educational attainment explains most of the employment gap–blacks graduate from school at lower rates, are less likely to earn a college degree, and overall have worse SAT/ACT scores. In an increasingly “post-industrial,” knowledge-based economy where most unskilled labor can be performed by robots, what happens to unskilled humans?

What happens when all of the McDonald’s employees have been replaced by robots and computers? When even the advice given by lawyers and accountants can be more cheaply delivered by an app on your smartphone? What if society, eventually, doesn’t need humans to perform most jobs?

Will most people simply be unemployed, ruled over by the robot-owning elite and the lucky few who program the robots? Will new forms of work we haven’t even begun to dream of emerge? Will we adopt some form of universal basic income, or descend into neo-feudalism? Will we have a permanent underclass of people with no hope of success in the current economy, either despairing at their inability to live successful lives or living slothfully off the efforts of others?

Here lies the crux of Auerswald’s thesis. He provides three possible arguments for how the “advance of code” (ie, the accumulation of technological knowledge and innovation,) could turn out for humans.

The Rifkin View:

  1. The power of code is growing at an exponential rate.
  2. Code nearly perfectly substitutes for human capabilities.
  3. Therefore the (relative) power of human capabilities is shrinking at an exponential rate.

If so, we should be deeply worried.

The Kurzweil View:

  1. The power of code is growing at an exponential rate.
  2. Code nearly perfectly complements human capabilities.
  3. Therefore the (absolute) power of human capabilities is growing at an exponential rate.

If so, we may look forward to the cyborg singularity.

The Auerswald View:

  1. The power of code is growing at an exponential rate [at least we all agree on something.]
  2. Code only partially substitutes for human capabilities.
  3. Therefore the (relative) power of human capabilities is shrinking at an exponential rate in those categories of work that can be performed by computers, but not in others.

Auerswald notes:

In other words, where Kurzweil talks about an impending code-induced Singularity, the reality looks much more like one code-induced bifurcation–the division of labor between humans and machines–after another.

The answer to the question, “Is there anything that humans can do better than digital computers?” turns out to be fairly simple: humans are better at being human.

Further:

1. Creating and improving code is a key part of what we human beings do. It’s how we invent the future by building on the past.

2. The evolution of the economy is driven by the advance of code. Understanding this advance is therefore fundamental to economics, and to much of human history.

3. When we create and advance code we don’t just invent new toys, we produce new forms of meaning, new experiences, and new ways of making our way in the world.

What do you think?

When did language evolve?

The smartest non-human primates, like Kanzi the bonobo and Koko the gorilla, understand about 2,000 to 4,000 words. Koko can make about 1,000 signs in sign language and Kanzi can use about 450 lexigrams (pictures that stand for words.) Koko can also make some onomatopoetic words–that is, she can make and use imitative sounds in conversation.

A four-year-old human knows about 4,000 words, similar to an exceptional gorilla. An adult knows about 20,000-35,000 words. (Another study puts the upper bound at 42,000.)

Somewhere along our journey from ape-like hominins to homo sapiens sapiens, our ancestors began talking, but exactly when remains a mystery. The origins of writing have been amusingly easy to discover, because early writers were fond of very durable surfaces, like clay, stone, and bone. Speech, by contrast, evaporates as soon as it is heard–leaving no trace for archaeologists to uncover.

But we can find the things necessary for speech and the things for which speech, in turn, is necessary.

The main reason why chimps and gorillas, even those taught human language, must rely on lexigrams or gestures to communicate is that their voiceboxes, lungs, and throats work differently than ours. Their semi-arboreal lifestyle requires using the ribs as a rigid base for the arm and shoulder muscles while climbing, which in turn requires closing the lungs while climbing to provide support for the ribs.

Full bipedalism released our early ancestors from the constraints on airway design imposed by climbing, freeing us to make a wider variety of vocalizations.

Now is the perfect time to break out my file of relevant human evolution illustrations:

Source: Scientific American What Makes Humans Special

We humans split from our nearest living ape relatives about 7-8 million years ago, but true bipedalism may not have evolved for a few more million years. Since there are many different named hominins, here is a quick guide:

Source: Macroevolution in and Around the Hominin Clade

Australopithecines (light blue in the graph,) such as the famous Lucy, are believed to have been the first fully bipedal hominins, although, based on the shape of their toes, they may have still occasionally retreated into the trees. They lived between 4 and 2 million years ago.

Without delving into the myriad classification debates along the lines of “should we count this set of skulls as a separate species or are they all part of the natural variation within one species,” by the time the Homo genus arises with H. habilis or H. rudolfensis around 2.8 million years ago, humans were much worse at climbing trees.

Interestingly, one direction humans have continued evolving in is up.

Oldowan tool

The reliable production of stone tools represents an enormous leap forward in human cognition. The first known stone tools–Oldowan–are about 2.5-2.6 million years old and were probably made by H. habilis. These simple tools are typically shaped on only one side.

By the Acheulean–1.75 million-100,000 years ago–tool making had become much more sophisticated. Not only did knappers shape both sides of their stones, top and bottom, but they also made tools by first shaping a core stone and then flaking derivative pieces from it.

The first Acheulean tools were fashioned by H. erectus; by 100,000 years ago, H. sapiens had presumably taken over the technology.

Flint knapping is surprisingly difficult, as many an archaeology student has discovered.

These technological advances were accompanied by steadily increasing brain sizes.

I propose that the complexities of the Acheulean tool complex required some form of language to facilitate learning and teaching; this gives us a potential lower bound on language around 1.75 million years ago. Bipedalism gives us an upper bound around 4 million years ago, before which our voice boxes were likely more restricted in the sounds they could make.

A Different View

Even though “Homo sapiens” has been around for about 300,000 years (or rather, that is where we have chosen to draw the line between our species and its predecessor,) “behavioral modernity” only emerged around 50,000 years ago (very awkward timing if you know anything about human dispersal.)

Everything about behavioral modernity is heavily contested (including when it began,) but no matter how and when you date it, compared to the million years or so it took humans to figure out how to knap the back side of a rock, human technological advance has accelerated significantly over the past 100,000 years–and even more so over the past 50,000, and the past 10,000.

Fire was another of humanity’s early technologies:

Claims for the earliest definitive evidence of control of fire by a member of Homo range from 1.7 to 0.2 million years ago (Mya).[1] Evidence for the controlled use of fire by Homo erectus, beginning some 600,000 years ago, has wide scholarly support.[2][3] Flint blades burned in fires roughly 300,000 years ago were found near fossils of early but not entirely modern Homo sapiens in Morocco.[4] Evidence of widespread control of fire by anatomically modern humans dates to approximately 125,000 years ago.[5]

What prompted this sudden acceleration? Noam Chomsky suggests that it was triggered by the evolution of our ability to use and understand language:

Noam Chomsky, a prominent proponent of discontinuity theory, argues that a single chance mutation occurred in one individual in the order of 100,000 years ago, installing the language faculty (a component of the mind–brain) in “perfect” or “near-perfect” form.[6]

(Pumpkin Person has more on Chomsky.)

More specifically, we might say that this single chance mutation created the capacity for figurative or symbolic language, as clearly apes already have the capacity for very simple language. It was this ability to convey abstract ideas, then, that allowed humans to begin expressing themselves in other abstract ways, like cave painting.

I disagree with this view on the grounds that human groups were already pretty widely dispersed by 100,000 years ago. For example, Pygmies and Bushmen are descended from groups of humans who had already split off from the rest of us by then, but they still have symbolic language, art, and everything else contained in the behavioral modernity toolkit. Of course, if a trait is particularly useful or otherwise successful, it can spread extremely quickly (think lactose tolerance,) and neither Bushmen nor Pygmies were 100% genetically isolated for the past 250,000 years, but I simply think the math here doesn’t work out.

However, that doesn’t mean Chomsky isn’t on to something. For example, Johanna Nichols (another linguist,) used statistical models of language differentiation to argue that modern languages split around 100,000 years ago.[31] This coincides neatly with the upper bound on the Out of Africa theory, suggesting that Nichols may actually have found the point when language began differentiating because humans left Africa, or perhaps she found the origin of the linguistic skills necessary to accomplish humanity’s cross-continental trek.

Philip Lieberman and Robert McCarthy looked at the shape of Neanderthal, Homo erectus, early H. sapiens, and modern H. sapiens’ vocal tracts:

In normal adults these two portions of the SVT form a right angle to one another and are approximately equal in length—in a 1:1 proportion. Movements of the tongue within this space, at its midpoint, are capable of producing tenfold changes in the diameter of the SVT. These tongue maneuvers produce the abrupt diameter changes needed to produce the formant frequencies of the vowels found most frequently among the world’s languages—the “quantal” vowels [i], [u], and [a] of the words “see,” “do,” and “ma.” In contrast, the vocal tracts of other living primates are physiologically incapable of producing such vowels.

(Since juvenile humans are shaped differently than adults, they pronounce sounds slightly differently until their voiceboxes fully develop.)

Their results:

…Neanderthal necks were too short and their faces too long to have accommodated equally proportioned SVTs. Although we could not reconstruct the shape of the SVT in the Homo erectus fossil because it does not preserve any cervical vertebrae, it is clear that its face (and underlying horizontal SVT) would have been too long for a 1:1 SVT to fit into its head and neck. Likewise, in order to fit a 1:1 SVT into the reconstructed Neanderthal anatomy, the larynx would have had to be positioned in the Neanderthal’s thorax, behind the sternum and clavicles, much too low for effective swallowing. …

Surprisingly, our reconstruction of the 100,000-year-old specimen from Israel, which is anatomically modern in most respects, also would not have been able to accommodate a SVT with a 1:1 ratio, albeit for a different reason. … Again, like its Neanderthal relatives, this early modern human probably had an SVT with a horizontal dimension longer than its vertical one, translating into an inability to reproduce the full range of today’s human speech.

It was only in our reconstruction of the most recent fossil specimens—the modern humans postdating 50,000 years— that we identified an anatomy that could have accommodated a fully modern, equally proportioned vocal tract.

Just as small children who can’t yet pronounce the letter “r” can nevertheless make and understand language, I don’t think early humans needed to have all of the same sounds as we have in order to communicate with each other. They would have just used fewer sounds.

The change in our voiceboxes may not have triggered the evolution of language, but instead been triggered by language itself. As humans began transmitting more knowledge via language, humans who could make more sounds–and thus utter a greater range of words–perhaps had an edge over their peers: maybe they were seen as particularly clever, or perhaps they had an easier time organizing bands of hunters and warriors.

One of the interesting things about human language is that it is clearly simultaneously cultural–which language you speak is entirely determined by culture–and genetic–only humans can produce language in the way we do. Even the smartest chimps and dolphins cannot match our vocabularies, nor imitate our sounds. Human infants–unless they have some form of brain damage–learn language instinctually, without conscious teaching. (Insert reference to Steven Pinker.)

Some kind of genetic changes were obviously necessary to get from apes to human language use, but exactly what remains unclear.

A variety of genes are associated with language use, eg FOXP2. H. sapiens and chimps have different versions of the FOXP2 gene (and Neanderthals had a third version, more similar to the H. sapiens version than to the chimp’s,) but to my knowledge we have yet to discover exactly when the necessary mutations arose.

Despite their impressive skulls and survival in a harsh, novel climate, Neanderthals seem not to have engaged in much symbolic activity (though to be fair, they were wiped out right about the time Sapiens really got going with its symbolic activity.) Homo sapiens and Homo neanderthalensis split around 800,000-400,000 years ago–perhaps the difference in our language genes ultimately gave Sapiens the upper hand.

Just as farming appears to have emerged relatively independently in several different locations around the world at about the same time, so behavioral modernity seems to have taken off in several different groups around the same time. Of course we can’t rule out the possibility that these groups had some form of contact with each other–peaceful or otherwise–but it seems more likely to me that similar behaviors emerged in disparate groups around the same time because the cognitive precursors for those behaviors had already developed before the groups split.

Based on genetics, the shape of their larynges, and their cultural toolkits, Neanderthals probably did not have modern speech, but they may have had something similar to it. This suggests that at the time of the Sapiens-Neanderthal split, our common ancestor possessed some primitive speech capacity.

By the time Sapiens and Neanderthals encountered each other again, nearly half a million years later, Sapiens’ language ability had advanced, possibly due to further modification of FOXP2 and other genes like it, plus our newly modified voiceboxes, while Neanderthals’ had lagged. Sapiens achieved behavioral modernity and took over the planet, while Neanderthals disappeared.

 

The Value of Viral Memes

Note: “Memes” on this blog is used as it is in the field of memetics, representing units of ideas that are passed from person to person, not in the sense of “funny cat pictures on the internet.”

“Mitochondrial memes” are memes that are passed vertically from parent to child, like “it’s important to eat your dinner before dessert” or “brush your teeth twice a day or your teeth will rot out.”

“Meme viruses” (I try to avoid the confusing phrase, “viral memes,”) are memes that are transmitted horizontally through society, like chain letters and TV news.
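
The two spread on very different timescales. A minimal sketch, with arbitrary invented rates, of why horizontal transmission so badly outpaces vertical:

```python
POP = 100_000
vertical = horizontal = 100  # initial carriers of each meme

for step in range(10):
    # Vertical: the meme passes only to carriers' children, who (say)
    # outnumber non-carriers' children by 10% -- growth per generation.
    vertical = min(POP, int(vertical * 1.10))
    # Horizontal: each carrier passes the meme to one stranger per step --
    # growth many times over within a single generation.
    horizontal = min(POP, horizontal * 2)

print(vertical, horizontal)  # ~256 carriers vs. the entire population
```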

I’ve spent a fair amount of time warning about some of the potential negative results of meme viruses, but today I’d like to discuss one of their greatest strengths: you can transmit them to other people without using them yourself.

Let’s start with genetics. It is very easy to quickly evolve in a particular direction if a variety of relevant traits already exist in a population. For example, humans already vary in height, so if you wanted to, say, make everyone on Earth shorter, you would just have to stop all of the tall people from reproducing. The short people would create the next generation, and it would be short.
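
A minimal sketch of that process, with invented numbers (normally distributed heights and perfect heritability are simplifying assumptions):

```python
import random

random.seed(0)
population = [random.gauss(170, 7) for _ in range(10_000)]  # heights in cm

for generation in range(1, 6):
    population.sort()
    parents = population[: len(population) // 2]  # only the short reproduce
    mid = sum(parents) / len(parents)
    # Children cluster around the parental mean, with fresh variation.
    population = [random.gauss(mid, 7) for _ in range(10_000)]
    print(f"generation {generation}: mean height "
          f"{sum(population) / len(population):.1f} cm")
```

Note that this toy regenerates variation around the new mean every generation; in a real population the standing variation runs out, which is why pushing far beyond the existing range requires the rare mutations discussed next.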

But getting adult human height below three feet requires not just existing, normal human height variation, but exploiting random mutations. These are rare, and the people who have them normally incur huge reductions in fitness, as they often have problems with bone growth, intelligence, and giving birth.

Most random mutations simply result in an organism’s death. Very few are useful, and those that are have to beat out all of the other local genetic combinations to actually stick around.

Suppose you happen to be born with a very lucky genetic trait: a rare mutation that lets you survive more easily in an arctic environment.

But you were born in Sudan.

Your genetic trait could be really useful if you could somehow give it away to someone in Siberia, but no, you are stuck in Sudan and you are really hot all of the time and then you die of heatstroke.

With the evolution of complex thought, humans (nearly alone among animals) developed the ability to go beyond mere genetic abilities, instincts, and impulses, and impart stores of knowledge to the next generation. Humanity has been accumulating mitochondrial memes for millions of years, ever since the first human showed another human how to wield fire and create stone tools. (Note: the use of fire and stone tools predates the emergence of Homo sapiens by a long while, but not the Homo genus.)

But mitochondrial memes, to get passed on, need to offer some immediate benefit to their holders. Humans are smart enough–and the utility of information unpredictable enough–that we can hold onto some ideas that are not obviously useful, or are even absurd, but the bulk of our efforts have to go toward information that helps us survive.

(By definition, mitochondrial memes aren’t written down; they have to be remembered.)

If an idea doesn’t offer some benefit to its holder, it is likely to be quickly forgotten–even if it could be very useful to someone else.

Suppose one day you happen to have a brilliant new idea for how to keep warm in a very cold environment–but you live in Sudan. If you can’t tell your idea to anyone who lives somewhere cold, your idea will never be useful. It will die with you.

But introduce writing, and ideas of no use to their holder can be recorded and transmitted to people who can use them. For example, in 1502, Leonardo da Vinci designed a 720-foot (220 m) bridge for Ottoman Sultan Beyazid II of Constantinople. The sultan never built Leonardo’s bridge, but in 2001, a bridge based on his design was finally built in Norway. Leonardo’s ideas for flying machines, while also not immediately useful, inspired generations of future engineers.

Viral memes don’t have to be immediately useful to stick around. They can be written down, tucked into a book, and picked up again a hundred years later and a thousand miles away by someone who can use them. A person living in Sudan can invent a better way to stay warm, write it down, and send it to someone in Siberia–and someone in Siberia can invent a better way to stay cool, write it down, and send it back.

Original Morse Telegraph machine, circa 1835

Many modern scientific and technological advances are based on the contributions of not one or two or ten inventors, but thousands, each contributing their unpredictable part to the overall whole. Electricity, for example, was a mere curiosity when Thales of Miletus wrote about the effects of rubbing amber to produce static electricity (the word “electricity” is actually derived from the Greek for “amber.”) Between 1600 and 1800, scientists began studying electricity in a more systematic way, but it still wasn’t useful. It was only with the invention of the telegraph, assembled from many different electrical parts and systems (first working model, 1816; first telegram sent in the US, 1838,) that electricity became useful. With the invention of electric lights and the electrical grids necessary to power them (1870s and 80s,) electricity moved into people’s homes.

The advent of meme viruses has thus given humanity two gifts: 1. People can use technology like books and the internet to store more information than we can naturally, like external hard-drives for our brains; and 2. we can preserve and transmit ideas that aren’t immediately useful to ourselves to people who can use them.