Book Club Pick: The 10,000 Year Explosion

Our next Book Club pick is Cochran and Harpending’s The 10,000 Year Explosion: How Civilization Accelerated Human Evolution. From the book’s description on Amazon:

Scientists have long believed that the “great leap forward” that occurred some 40,000 to 50,000 years ago in Europe marked the end of significant biological evolution in humans. In this stunningly original account of our evolutionary history, top scholars Gregory Cochran and Henry Harpending reject this conventional wisdom and reveal that the human species has undergone a storm of genetic change much more recently. Human evolution in fact accelerated after civilization arose, they contend, and these ongoing changes have played a pivotal role in human history. They argue that biology explains the expansion of the Indo-Europeans, the European conquest of the Americas, and European Jews’ rise to intellectual prominence. …

I just received the book, so I haven’t read it yet, but I’ve been a big fan of Greg and Henry’s blog (now Greg’s blog, since Henry passed away) for a long time. I expect to finish reading and get the relevant discussion posts up in about two months–I’ll update the time frame as we get closer.

Please let me know if you prefer short-form discussion (like our discussion of Kurzweil’s How to Create a Mind), or long-form discussion (like Auerswald’s The Code Economy), or something in between.

Book Club: How to Create a Mind, pt 2/2

Ray Kurzweil, writer, inventor, thinker

Welcome back to EvX’s Book Club. Today we are finishing Ray Kurzweil’s How to Create a Mind: The Secret of Human Thought Revealed.

Spiders are interesting, but Kurzweil’s focus is computers, like Watson, which trounced the competition on Jeopardy!

I’ll let Wikipedia summarize Watson:

Watson was created as a question answering (QA) computing system that IBM built to apply advanced natural language processing, information retrieval, knowledge representation, automated reasoning, and machine learning technologies to the field of open domain question answering.[2]

The sources of information for Watson include encyclopedias, dictionaries, thesauri, newswire articles, and literary works. Watson also used databases, taxonomies, and ontologies. …

Watson parses questions into different keywords and sentence fragments in order to find statistically related phrases.[22] Watson’s main innovation was not in the creation of a new algorithm for this operation but rather its ability to quickly execute hundreds of proven language analysis algorithms simultaneously.[22][24] The more algorithms that find the same answer independently the more likely Watson is to be correct.[22] Once Watson has a small number of potential solutions, it is able to check against its database to ascertain whether the solution makes sense or not.[22]
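The voting logic Wikipedia describes–run many independent analyzers, trust the answer they converge on–is simple enough to sketch. Here is a toy illustration of the idea (my own sketch, not IBM’s code; the analyzers below are stand-ins for Watson’s hundreds of language-analysis algorithms):

```python
from collections import Counter

def ensemble_answer(question, analyzers):
    """Score candidate answers by how many independent analyzers agree on them."""
    votes = Counter()
    for analyze in analyzers:
        candidate = analyze(question)  # each analyzer returns its best-guess answer
        if candidate is not None:
            votes[candidate] += 1
    if not votes:
        return None, 0.0
    answer, count = votes.most_common(1)[0]
    return answer, count / len(analyzers)  # more independent agreement -> more confidence

# Three fake "analyzers"; two converge, so their shared answer wins.
analyzers = [lambda q: "Chicago", lambda q: "Chicago", lambda q: "Toronto"]
print(ensemble_answer("This US city's largest airport is named for a WWII hero...", analyzers))
# ('Chicago', 0.666...)
```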

Kurzweil opines:

That is at least one reason why Watson represents such a significant milestone: Jeopardy! is precisely such a challenging language task. … What is perhaps not evident to many observers is that Watson not only had to master the language in the unexpected and convoluted queries, but for the most part its knowledge was not hand-coded. It obtained that knowledge by actually reading 200 million pages of natural-language documents, including all of Wikipedia… If Watson can understand and respond to questions based on 200 million pages–in three seconds!–there is nothing to stop similar systems from reading the other billions of documents on the Web. Indeed, that effort is now under way.

A point about the history of computing that may be petty of me to emphasize:

Babbage’s conception is quite miraculous when you consider the era in which he lived and worked. However, by the mid-twentieth century, his ideas had been lost in the mists of time (although they were subsequently rediscovered.) It was von Neumann who conceptualized and articulated the key principles of the computer as we know it today, and the world recognizes this by continuing to refer to the von Neumann machine as the principal model of computation. Keep in mind, though, that the von Neumann machine continually communicates data between its various units and within those units, so it could not be built without Shannon’s theorems and the methods he devised for transmitting and storing reliable digital information. …

You know what? No, it’s not petty.

Amazon lists 57 books about Ada Lovelace aimed at children, 14 about Alan Turing, and ZERO about John von Neumann.

(Some of these search results are inevitably irrelevant, but the counts are roughly correct.)

“EvX,” you may be saying, “Why are you counting children’s books?”

Because children are our future, and the books that get published for children show what society deems important for children to learn–and will have an effect on what adults eventually know.

I don’t want to demean Ada Lovelace’s role in the development of software, but surely von Neumann’s contributions to the field are worth a single book!

*Slides soapbox back under the table*

Anyway, back to Kurzweil, now discussing quantum mechanics:

There are two ways to view the questions we have been considering–the converse Western and Eastern perspectives on the nature of consciousness and of reality. In the Western perspective, we start with a physical world that evolves patterns of information. After a few billion years of evolution, the entities in that world have evolved sufficiently to become conscious beings. In the Eastern view, consciousness is the fundamental reality; the physical world only comes into existence through the thoughts of conscious beings. …

The East-West divide on the issue of consciousness has also found expression in opposing schools of thought in the field of subatomic physics. In quantum mechanics, particles exist in what are called probability fields. Any measurement carried out on them by a measuring device causes what is called a collapse of the wave function, meaning that the particle suddenly assumes a particular location. A popular view is that such a measurement constitutes observation by a conscious observer… Thus the particle assumes a particular location … only when it is observed. Basically particles figure that if no one is bothering to look at them, they don’t need to decide where they are. I call this the Buddhist school of quantum mechanics …
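To put Kurzweil’s “probability field” language in symbols: in the standard formalism, the wavefunction ψ assigns each position a complex amplitude, and the Born rule turns amplitudes into probabilities of where the particle will be found upon measurement. (This gloss is mine, not Kurzweil’s.)

$$P(x)\,dx = |\psi(x)|^2\,dx, \qquad \int_{-\infty}^{\infty} |\psi(x)|^2\,dx = 1$$

“Collapse” is the replacement of ψ by a state sharply peaked at the measured position; the interpretations Kurzweil sketches differ over whether that replacement is a physical event or mere bookkeeping.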

Niels Bohr

Or as Niels Bohr put it, “A physicist is just an atom’s way of looking at itself.” He also claimed that we could describe electrons as exercising free will in choosing their positions, a statement I do not think he meant literally; “We must be clear that when it comes to atoms, language can be used only as in poetry,” as he put it.

Kurzweil explains the Western interpretation of quantum mechanics:

There is another interpretation of quantum mechanics… In this analysis, the field representing a particle is not a probability field, but rather just a function that has different values in different locations. The field, therefore, is fundamentally what the particle is. … The so-called collapse of the wave function, this view holds, is not a collapse at all. … It is just that a measurement device is also made up of particles with fields, and the interaction of the particle field being measured and the particle fields of the measuring device result in a reading of the particle being in a particular location. The field, however, is still present. This is the Western interpretation of quantum mechanics, although it is interesting to note that the more popular view among physicists worldwide is what I have called the Eastern interpretation.

Soviet atomic bomb, 1951

For example, Bohr has the yin-yang symbol on his coat of arms, along with the motto contraria sunt complementa, or contraries are complementary. Oppenheimer was such a fan of the Bhagavad Gita that he read it in Sanskrit and quoted it upon successful completion of the Trinity Test, “If the radiance of a thousand suns were to burst at once into the sky, that would be like the splendor of the mighty one,” and “Now I am become death, the destroyer of worlds.” He credited the Gita as one of the most important books in his life.

Why the appeal of Eastern philosophy? Is it something about physicists and mathematicians? Leibniz, after all, was fond of the I Ching. As Wikipedia says:

Leibniz was perhaps the first major European intellectual to take a close interest in Chinese civilization, which he knew by corresponding with, and reading other works by, European Christian missionaries posted in China. Having read Confucius Sinarum Philosophus on the first year of its publication,[153] he concluded that Europeans could learn much from the Confucian ethical tradition. He mulled over the possibility that the Chinese characters were an unwitting form of his universal characteristic. He noted with fascination how the I Ching hexagrams correspond to the binary numbers from 000000 to 111111, and concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical mathematics he admired.[154] Leibniz communicated his ideas of the binary system representing Christianity to the Emperor of China hoping it would convert him.[84] Leibniz may be the only major Western philosopher who attempted to accommodate Confucian ideas to prevailing European beliefs.[155]

Leibniz’s attraction to Chinese philosophy originates from his perception that Chinese philosophy was similar to his own.[153] The historian E.R. Hughes suggests that Leibniz’s ideas of “simple substance” and “pre-established harmony” were directly influenced by Confucianism, pointing to the fact that they were conceived during the period that he was reading Confucius Sinarum Philosophus.[153]
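Leibniz’s hexagram observation is easy to check: read each hexagram’s six lines as bits (say broken = 0, solid = 1), and the 64 hexagrams enumerate exactly the binary numbers 0 through 63. A quick sketch, with an arbitrary line-encoding convention of my own choosing:

```python
from itertools import product

def hexagram_to_int(lines):
    """Interpret six lines (broken = 0, solid = 1) as a six-bit binary number."""
    value = 0
    for bit in lines:
        value = (value << 1) | bit
    return value

all_hexagrams = list(product([0, 1], repeat=6))
assert len(all_hexagrams) == 64  # 2**6 possible hexagrams
assert sorted(map(hexagram_to_int, all_hexagrams)) == list(range(64))
print(hexagram_to_int((1, 1, 1, 1, 1, 1)))  # 63: the all-solid hexagram (Qian, "Heaven")
```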

Perhaps it is just that physicists and mathematicians are naturally curious people, and Eastern philosophy is novel to a Westerner, or perhaps by adopting Eastern ideas, they were able to purge their minds of earlier theories of how the universe works, creating a blank space in which to evaluate new data without being biased by old conceptions–or perhaps it is just something about the way their minds work.

As for quantum mechanics, I favor the de Broglie-Bohm interpretation, but obviously I am not a physicist and my opinion doesn’t count for much. What do you think?

But back to the book. If you are fond of philosophical ruminations on the nature of consciousness, like “If someone who could only see in black and white read extensively about the color ‘red,’ could they ever achieve the qualia of actually seeing the color red?” or “What if a man were locked in a room with a perfect Chinese rulebook that told him which Chinese characters to write in response to any set of characters written on notes passed under the door? The responses are in perfect Chinese, but the man himself understands not a word of Chinese,” then you’ll enjoy the discussion. If you already covered all of this back in Philosophy 101, you might find it a bit redundant.

Kurzweil notes that conditions have improved massively over the past century for almost everyone on earth, but people are increasingly anxious:

A primary reason people believe life is getting worse is because our information about the problems of the world has steadily improved. If there is a battle today somewhere on the planet, we experience it almost as if we were there. During World War II, tens of thousands of people might perish in a battle, and if the public could see it at all it was in a grainy newsreel in a movie theater weeks later. During World War I a small elite could read about the progress of the conflict in the newspaper (without pictures.) During the nineteenth century there was almost no access to news in a timely fashion for anyone.

As for the future of man, machines, and code, Kurzweil is even more optimistic than Auerswald:

The last invention that biological evolution needed to make–the neocortex–is inevitably leading to the last invention that humanity needs to make–truly intelligent machines–and the design of one is inspiring the other. … by the end of this century we will be able to create computation at the limits of what is possible, based on the laws of physics… We call matter and energy organized in this way “computronium,” which is vastly more powerful pound per pound than the human brain. It will not just be raw computation but will be infused with intelligent algorithms constituting all of human-machine knowledge. Over time we will convert much of the mass and energy in our tiny corner of the galaxy that is suitable for this purpose to computronium. … we will need to spread out to the rest of the galaxy and universe. …

How long will it take for us to spread our intelligence in its nonbiological form throughout the universe? … waking up the universe, and then intelligently deciding its fate by infusing it with our human intelligence in its nonbiological form, is our destiny.

Whew! That is quite the ending–and with that, so shall we end. I hope you enjoyed the book. What did you think of it? Will Humanity 2.0 be good? Bad? Totally different? Or does the Fermi Paradox imply that Kurzweil is wrong? Did you like this shorter Book Club format? And do you have any ideas for our next Book Club pick?

Book Club: How to Create a Mind by Ray Kurzweil pt 1/2

Welcome to our discussion of Ray Kurzweil’s How to Create a Mind: The Secret of Human Thought Revealed. This book was requested by one of my fine readers; I hope you have enjoyed it.

If you aren’t familiar with Ray Kurzweil (you must be new to the internet), he is a computer scientist, inventor, and futurist whose work focuses primarily on artificial intelligence and phrases like “technological singularity.”

Wikipedia really likes him.

The book is part neuroscience, part explanations of how various AI programs work. Kurzweil uses models of how the brain works to enhance his pattern-recognition programs, and evidence from what works in AI programs to build support for theories on how the brain works.

The book delves into questions like “What is consciousness?” and “Could we recognize a sentient machine if we met one?” along with a brief history of computing and AI research.

My core thesis, which I call the Law of Accelerating Returns (LOAR), is that fundamental measures of information technology follow predictable and exponential trajectories…

I found this an interesting sequel to Auerswald’s The Code Economy and counterpart to Gazzaniga’s Who’s In Charge? Free Will and the Science of the Brain, which I listened to in audiobook form and therefore cannot quote very easily. Nevertheless, it’s a good book and I recommend it if you want more on brains.

The quintessential example of the law of accelerating returns is the perfectly smooth, doubly exponential growth of the price/performance of computation, which has held steady for 110 years through two world wars, the Great Depression, the Cold War, the collapse of the Soviet Union, the reemergence of China, the recent financial crisis, … Some people refer to this phenomenon as “Moore’s law,” but… [this] is just one paradigm among many.

From Ray Kurzweil

Auerswald claims that the advance of “code” (that is, technologies like writing that allow us to encode information) has, for the past 40,000 years or so, supplemented and enhanced human abilities, making our lives better. Auerswald is not afraid of increasing mechanization and robotification of the economy putting people out of jobs, because he believes that computers and humans are good at fundamentally different things. Computers, in fact, were invented to do things we are bad at, like breaking encryption, not stuff we’re good at, like eating.

The advent of computers, in his view, lets us concentrate on the things we’re good at, while off-loading the stuff we’re bad at to the machines.

Kurzweil’s view is different. While he agrees that computers were originally invented to do things we’re bad at, he also thinks that the computers of the future will be very different from those of the past, because they will be designed to think like humans.

A computer that can think like a human can compete with a human–and since it isn’t limited in its processing power by pelvic widths, it may well out-compete us.

But Kurzweil does not seem worried:

Ultimately we will create an artificial neocortex that has the full range and flexibility of its human counterpart. …

When we augment our own neocortex with a synthetic version, we won’t have to worry about how much additional neocortex can physically fit into our bodies and brains, as most of it will be in the cloud, like most of the computing we use today. I estimated earlier that we have on the order of 300 million pattern recognizers in our biological neocortex. That’s as much as could be squeezed into our skulls even with the evolutionary innovation of a large forehead and with the neocortex taking about 80 percent of the available space. As soon as we start thinking in the cloud, there will be no natural limits–we will be able to use billions or trillions of pattern recognizers, basically whatever we need, and whatever the law of accelerating returns can provide at each point in time. …

Last but not least, we will be able to back up the digital portion of our intelligence. …

That is kind of what I already do with this blog. The downside is that sometimes you people see my incomplete or incorrect thoughts.

On the squishy side, Kurzweil writes of the biological brain:

The story of human intelligence starts with a universe that is capable of encoding information. This was the enabling factor that allowed evolution to take place. …

The story of evolution unfolds with increasing levels of abstraction. Atoms–especially carbon atoms, which can create rich information structures by linking in four different directions–formed increasingly complex molecules. …

A billion years later, a complex molecule called DNA evolved, which could precisely encode lengthy strings of information and generate organisms described by these “programs”. …

The mammalian brain has a distinct aptitude not found in any other class of animal. We are capable of hierarchical thinking, of understanding a structure composed of diverse elements arranged in a pattern, representing that arrangement with a symbol, and then using that symbol as an element in a yet more elaborate configuration. …

I really want to know if squids or octopuses can engage in symbolic thought.

Through an unending recursive process we are capable of building ideas that are ever more complex. … Only Homo sapiens have a knowledge base that itself evolves, grows exponentially, and is passed down from one generation to another.

Kurzweil proposes an experiment to demonstrate something of how our brains encode memories: say the alphabet backwards.

If you’re among the few people who’ve memorized it backwards, try singing “Twinkle Twinkle Little Star” backwards.

It’s much more difficult than doing it forwards.

This suggests that our memories are sequential and in order. They can be accessed in the order they are remembered. We are unable to reverse the sequence of a memory.

Funny how that works.
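In data-structure terms, what Kurzweil is describing behaves like a singly linked list: each element points only to its successor, so forward traversal is cheap while backward recall requires rebuilding the chain. The analogy is mine, not his–a sketch:

```python
class Node:
    def __init__(self, value, next_node=None):
        self.value, self.next = value, next_node

# Build "twinkle twinkle little star" as a forward-only chain.
song = None
for word in reversed("twinkle twinkle little star".split()):
    song = Node(word, song)

# Forward recall is a simple walk down the pointers...
node = song
while node:
    print(node.value, end=" ")
    node = node.next
print()
# ...but there is no .prev pointer: recalling the song backwards means
# re-walking from the head for every step, or building a reversed copy.
```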

On the neocortex itself:

A critically important observation about the neocortex is the extraordinary uniformity of its fundamental structure. … In 1957 Mountcastle discovered the columnar organization of the neocortex. … [In 1978] he described the remarkably unvarying organization of the neocortex, hypothesizing that it was composed of a single mechanism that was repeated over and over again, and proposing the cortical column as the basic unit. The differences in the height of certain layers in different regions noted above are simply differences in the amount of interconnectivity that the regions are responsible for dealing with. …

extensive experimentation has revealed that there are in fact repeating units within each column. It is my contention that the basic unit is a pattern recognizer and that this constitutes the fundamental component of the neocortex.

As I read, Kurzweil’s hierarchical models reminded me of Chomsky’s theories of language–Ray and Noam are both associated with MIT and have probably conversed many times. Kurzweil does get around to discussing Chomsky’s theories and their relationship to his work:

Language is itself highly hierarchical and evolved to take advantage of the hierarchical nature of the neocortex, which in turn reflects the hierarchical nature of reality. The innate ability of humans to learn the hierarchical structures in language that Noam Chomsky wrote about reflects the structure of the neocortex. In a 2002 paper he co-authored, Chomsky cites the attribute of “recursion” as accounting for the unique language faculty of the human species. Recursion, according to Chomsky, is the ability to put together small parts into a larger chunk, and then use that chunk as a part in yet another structure, and to continue this process iteratively. In this way we are able to build the elaborate structure of sentences and paragraphs from a limited set of words. Although Chomsky was not explicitly referring here to brain structure, the capability he is describing is exactly what the neocortex does. …
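Chomsky’s “recursion” here is the same recursion programmers know: a structure defined in terms of smaller instances of itself. A minimal, purely illustrative sketch of building a sentence from nested chunks:

```python
def realize(phrase):
    """Flatten a recursively nested phrase (a chunk built of sub-chunks) into a sentence."""
    if isinstance(phrase, str):
        return phrase  # base case: a bare word or word group
    return " ".join(realize(part) for part in phrase)  # recursive case: a chunk of chunks

# "the cat [that saw [the dog [that chased the rat]]] slept"
sentence = ["the cat", ["that saw", ["the dog", ["that chased", "the rat"]]], "slept"]
print(realize(sentence))  # the cat that saw the dog that chased the rat slept
```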

This sounds good to me, but I am under the impression that Chomsky’s linguistic theories are now considered outdated. Perhaps that is only his theory of universal grammar, though. Any linguistics experts care to weigh in?

According to Wikipedia:

Within the field of linguistics, McGilvray credits Chomsky with inaugurating the “cognitive revolution”.[175] McGilvray also credits him with establishing the field as a formal, natural science,[176] moving it away from the procedural form of structural linguistics that was dominant during the mid-20th century.[177] As such, some have called him “the father of modern linguistics”.[178][179][180][181]

The basis to Chomsky’s linguistic theory is rooted in biolinguistics, holding that the principles underlying the structure of language are biologically determined in the human mind and hence genetically transmitted.[182] He therefore argues that all humans share the same underlying linguistic structure, irrespective of sociocultural differences.[183] In adopting this position, Chomsky rejects the radical behaviorist psychology of B. F. Skinner, which views the mind as a tabula rasa (“blank slate”) and thus treats language as learned behavior.[184] Accordingly, he argues that language is a unique evolutionary development of the human species and is unlike modes of communication used by any other animal species.[185][186] Chomsky’s nativist, internalist view of language is consistent with the philosophical school of “rationalism”, and is contrasted with the anti-nativist, externalist view of language, which is consistent with the philosophical school of “empiricism”.[187][174]

Anyway, back to Kurzweil, who has an interesting bit about love:

Science has recently gotten into the act as well, and we are now able to identify the biochemical changes that occur when someone falls in love. Dopamine is released, producing feelings of happiness and delight. Norepinephrine levels soar, which lead to a racing heart and overall feelings of exhilaration. These chemicals, along with phenylethylamine, produce elevation, high energy levels, focused attention, loss of appetite, and a general craving for the object of one’s desire. … serotonin levels go down, similar to what happens in obsessive-compulsive disorder….

If these biochemical phenomena sound similar to those of the fight-or-flight syndrome, they are, except that we are running toward something or someone; indeed, a cynic might say toward rather than away from danger. The changes are also fully consistent with those of the early phase of addictive behavior. … Studies of ecstatic religious experiences also show the same physical phenomena; it can be said that the person having such an experience is falling in love with God or whatever spiritual connection on which they are focused. …

Religious readers care to weigh in?

Consider two related species of voles: the prairie vole and the montane vole. They are pretty much identical, except that the prairie vole has receptors for oxytocin and vasopressin, whereas the montane vole does not. The prairie vole is noted for lifetime monogamous relationships, while the montane vole resorts almost exclusively to one-night stands.

Learning by species:

A mother rat will build a nest for her young even if she has never seen another rat in her lifetime. Similarly, a spider will spin a web, a caterpillar will create her own cocoon, and a beaver will build a dam, even if no contemporary ever showed them how to accomplish these complex tasks. That is not to say that these are not learned behaviors. It is just that the animals did not learn them in a single lifetime… The evolution of animal behavior does constitute a learning process, but it is learning by the species, not by the individual, and the fruits of this learning process are encoded in DNA.

I think that’s enough for today; what did you think? Did you enjoy the book? Is Kurzweil on the right track with his pattern recognizers? Are non-biological neocortexes on the horizon? Will we soon convert the solar system to computronium?

Let’s continue this discussion next Monday–so if you haven’t read the book yet, you still have a whole week to finish.

 

Book Club Announcement: How to Create a Mind

Next Book Club pick: How to Create a Mind: The Secret of Human Thought Revealed, by Ray Kurzweil. (This time we will be taking a different approach, and the discussion will be much shorter.)

From the Amazon blurb:

Ray Kurzweil is arguably today’s most influential—and often controversial—futurist. In How to Create a Mind, Kurzweil presents a provocative exploration of the most important project in human-machine civilization—reverse engineering the brain to understand precisely how it works and using that knowledge to create even more intelligent machines.

Discussion starts September 24th.

Book Club: Code Economy: Finale on the Blockchain

From All you need to know about blockchain

Welcome to our final discussion of Auerswald’s The Code Economy. Today we will be finishing the text, chapters 13-15. Please feel free to jump in even if you haven’t read the book.

After a hopefully entertaining digression about Peruvian Poutine and Netflix’s algorithms*, we progress to the discussion of Bitcoin and the Blockchain. Now, I don’t know anything about Bitcoin other than the vague ideas I have picked up by virtue of being a person on the internet, but it was an interesting discussion nonetheless.

Auerswald likens blockchain to an old-fashioned accountant’s ledger; the “blocks” are the rectangles in which a business’s earnings and expenses are recorded. If there is any question about a company’s profits, you can look back at the information recorded in the chain of blocks.

The problem with this system is that there is only one ledger. If the accountant has made a mistake (or worse, committed a theft), there is nothing else to compare it to in order to catch the error.

In the modern, distributed version, there are many copies of the blockchain. If on most of these copies of the chain, block 22 says -$400, and on one copy it says +$400, we conclude that the one that disagrees is most likely in error. Like the works of Shakespeare, there are so many copies out there that no single discrepant copy can claim to be authoritative; it is the collective body of work that matters.
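In code, that check is nothing more than a majority vote over the replicas’ values for a given block–a cartoon of distributed consensus, not any real blockchain’s protocol:

```python
from collections import Counter

def consensus_value(ledger_copies, block_index):
    """Return the value most replicas record for a block, plus the dissenting copies."""
    values = [copy[block_index] for copy in ledger_copies]
    majority_value, _ = Counter(values).most_common(1)[0]
    dissenters = [i for i, v in enumerate(values) if v != majority_value]
    return majority_value, dissenters

# Five copies of the ledger; copy 3 disagrees about block 22.
copies = [{22: -400}, {22: -400}, {22: -400}, {22: +400}, {22: -400}]
print(consensus_value(copies, 22))  # (-400, [3]) -- the lone +400 copy is presumed wrong
```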

“Blockchain” is probably going to get used here as a metaphor for “distributed systems of confirming authority” a lot. For example, “Democracy is a blockchain for deciding who gets to rule a country.” Or “science is a blockchain.”

In Rhodes’s “The Making of the Atomic Bomb,” he recounts the process by which something becomes accepted as “true” (or reasonably likely to be true) in the scientific community. Let’s suppose scientist M is the foremost authority in his field–perhaps organic LEDs. Scientists L and N are doing work that overlaps M’s, and can therefore basically evaluate M’s work and vouch for whether they think it is sound or not. Scientists J, K, O, and P do work that overlaps a lot with L and N and a little with M; they can evaluate M’s work a little and vouch for whether they think L and N are trustworthy. The chain continues down to little cat scientists A and Z, who can’t really evaluate scientist M, but can tell you whether or not they think B and Y’s results are trustworthy.

This community of science has both good and bad sides. In general, the structure of science has been extremely successful at inventing things like computers, atomic bombs, and penicillin; at times it creates resistance to new ideas just because they are so far outside the mainstream of what other scientists are doing. For example, Ignaz Semmelweis, a physician, discovered that he could reduce maternal deaths at his hospital from around 10-18% to 2% simply by insisting that obstetricians wash their hands between dissecting cadavers and delivering babies. Unfortunately, the rest of the medical establishment had not yet accepted the Germ Theory of disease and believed that disease was caused by imbalanced humors. Semmelweis’s idea that invisible corpse particles were somehow transferring corpse-ness from dead people to live people seemed absurd, and further, blamed the doctors themselves for the deaths of their patients. Semmelweis’s tragic tale ends with him being stomped to death in an insane asylum. (His mental ill-health was probably induced by a combination of the stress of being rejected by his profession and syphilis, contracted via charity work delivering babies for destitute prostitutes.)

Luckily for mothers everywhere, medical science eventually caught up with Semmelweis and puerperal fever is no longer a major concern for laboring women. Science, it seems, can correct itself. (We may want to be cautious about being too eager to reject new ideas–especially in cases where there is clearly a lot of room for improvement, like an 18% death rate.)

But back to the blockchain. In India:

Niti Aayog is working with Apollo Hospitals and information technology major Oracle on applying blockchain (decentralised) technology in pharmaceutical supply chain management to detect spurious drugs, Chief Executive Officer of NITI Aayog Amitabh Kant said here today.

Addressing a gathering through video-conferencing at the inaugural session of International Blockchain Congress 2018 for which Niti Aayog was a co-host, Kant said the organisation was working on applying the blockchain technology to pressing problems of the country in areas such as land registry, health records and fertiliser subsidy distribution, among others.

Further:

Blockchain technology can enable India to find solutions to huge logjams in courts …

With two-thirds of all civil cases pertaining to registration of property or land, the country’s policy think-tank is working with judiciary to find disruptive ways to expedite registrations, mutations and enable a system of smart transactions that is free of corruption and middlemen.

… There are three crore cases currently pending in Indian courts, including 42.5 lakh cases in high courts and 2.6 crore* cases in lower courts. Even if 100 cases are disposed off every hour without sleeping and eating, it would take more than 35 years to catch up, he said. …

On transforming the land registry system using blockchain, Niti Aayog is in advanced stage of implementing proof of concept pilot in Chandigarh to assess its potential to solve the problem of India’s land-based registry system. …

“It’s powerful because it allows multiple parties to collaborate and come to consensus without any need of third party,” he said.

*A crore is an Indian unit equivalent to 10 million.
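The backlog arithmetic is easy to verify (my own back-of-the-envelope, using the figures quoted above):

```python
pending = 3 * 10_000_000         # 3 crore = 30 million pending cases
rate_per_hour = 100              # cases disposed per hour, around the clock
hours = pending / rate_per_hour  # 300,000 hours
years = hours / (24 * 365.25)
print(round(years, 1))           # ~34.2 years, in the ballpark of the quoted "more than 35"
```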

I probably do not need to review Auerswald’s summary of Bitcoin’s history, as you are probably already well aware of it, but the question of “is Bitcoin real money?” is interesting. In 1875, Jevons, “cofounder of the neoclassical school in economics,” wrote that a material used as money should have the following traits:

“1. Utility and value
2. Portability
3. Indestructibility
4. Homogeneity
5. Divisibility
6. Stability of value
7. Cognizability.”

I am not sure about all of the items on this list; cigarettes and ramen noodles, for example, are used as currency in prisons, even though they are very easy to destroy. It seems like using a currency that you are going to eat would be problematic, yet the pattern recurs over and over in prisons (where perhaps people cannot get their hands on non-consumable goods, or perhaps people simply have no desire for non-consumable ornaments like gold.)

Gold–the “gold standard” of currencies–is a bit odd to me, because it has very few practical uses. You can’t eat it. You can’t plant with it, cure parasites with it, or build with it. Lots of people talk about how you’d want a hard currency like gold in the case of societal collapse in which people stop accepting fiat currency, but if zombies were invading, the gas stations had run out of gasoline, and the grocery stores were out of food, I can’t imagine that I’d trade what few precious commodities I had left for a pile of rocks.

People argue that fiat currency is “just paper,” but gold is “just rocks,” and unless you’re a jeweler, the value of either is dependent entirely on your expectation that other people will accept them as currency.

Auerswald writes:

For the past 40 years the world’s currencies have been untethered from gold or any other metal. National “fiat” currencies are nothing more or less than tradeable trust, whose function as currency is based entirely on government-enforced scarcity and verifiability, not tethered to its intrinsic usefulness.

I think Auerswald overlooks the role of force in backing fiat currencies. We don’t use Federal Reserve Notes because we trust the government like it’s our best friend from the army who pulled us out of a burning foxhole that one time. We use Federal Reserve Notes because the US government has a lot of guns and bombs to back up its claim that this is real money.

Which means the power of a dollar is dependent on the US government’s ability to enforce that value.

As for Bitcoin:

Bitcoin… satisfies all the criteria for being “money” that William Stanley Jevons set forth… with one exception: intrinsic utility and value. That does not mean that Bitcoin will grow in significance as a means of exchange, much less achieve any position of dominance. But with digital transactions via mobile phones–Apple Pay and the like–becoming ever more common and the concept of a digital currency not backed by any government gaining rapid acceptance, the prospect of one or another digital currency competing successfully with fiat currencies is not nearly as far-fetched today as it was even three years ago.

The biggest problems I see for digital currencies:

  1. Keeping value–if people decide they won’t accept DogeCoin, then what do you do with all of your DogeCoins?
  2. Ease of entry into the market makes it difficult for any one Coin to retain value
  3. Most people are happy using currencies not associated with illegal activity
You mean you can just make more of these things? Mugabe is brilliant!

The upside to digital currencies is they may be a real blessing for people caught in countries where local fiat currencies are being manipulated all to hell.

Anyway, Auerswald envisions a world in which blockchains (with coins or not) enable a world of peer-to-peer authentication and transactions:

By their very structure, peer-to-peer platforms start out being distributed. The challenge is how to organize all of the energy contained in such networks so that people are rewarded fairly for their contributions. … Blockchain-based systems for governing peer-to-peer networks hold the promise–so far unrealized–of incorporating the best features of markets when it comes to rewarding contribution and of organizations when it comes to keeping track of reputations.

In other words, in areas where economies are held back because the local governments do a bad job of enforcing contracts and securing property rights, “blockchain”-like algorithms may be able to step into the gap and provide an accepted, distributed, alternative system of enforcement and authentication.

(This is the point where I start ranting to anyone within earshot about communists not recognizing the necessity of secure property rights so that people can turn their property into capital in order to start businesses. Without that seed money to start a business, you can’t get started. Even something simple, like driving for Uber, requires a car to start with, and cars cost money. If you can’t depend on having money tomorrow because all of your property just got confiscated, or you can’t depend on having a car tomorrow because private property is for bourgeois scum, then you can’t get a job driving for Uber. If no one can convert property to capital and thus to businesses, then you don’t get business and you have no economy and people suffer.

Communists see that some people have property that they can convert to capital and other people don’t have said capital, and their solution is to just take everybody’s stuff away and declare the problem fixed, when what they really want is for everyone to have enough basic property and capital to be able to start their own business.)

But back to Auerswald:

Earlier… I alluded to the significant advance in democracy, science, and financial systems that occurred simultaneously during …the Age of Enlightenment. That systems of governance, inquiry, and economics should have advanced all at the same time… is no coincidence at all. Each of these foundational developments in human social evolution is, at its core, an algorithm for authentication and verification. …

It is only because of the disciplinary fragmentation of inquiry that has occurred in the past century that we do not immediately perceive in the evolved historical record the patterns connecting systems of authentication and verification in politics, science, and economics as they have jointly evolved. … Illuminating those patterns has been the point of this book.

Chapter 14 begins with a history of Burning Man, which the author defends thus:

Still, it makes for an interesting case study in the building of cities (and why laws get enacted): Like everything about Black Rock City, the layout is the product of both planning and evolution. Cities are what physicists refer to as dissipative structures: highly complex organisms whose existence depends on a constant throughput of energy. If you were to close down all bridges and tunnels into New York City … grocery stores would have only a three-day supply of food. The same is generally true of a city’s other energy requirements. All cities are temporary, and they survive only because we feed them. …

The evolution of Black Rock is for urbanists what a real-life Jurassic Park would be for a Paleontologist. We really have no idea what the experience of living in humanity’s first cities might have been–whether Uruk in Mesopotamia or Catalhoyuk in Anatolia. And yet all cities also have elements of planning. Where Black Rock City has its Larry Harvey, London had its Robert Hooke and Washington, D.C., had its Pierre L’Enfant.  Each had a notion of how to bound a space, build symmetry and flow, and in so doing provide a platform where the human experience can unfold.

I have a somewhat dim view of “Burning Man” as a communist utopia that’s only open to rich people, filled with environmentalist hippies leaving an enormous carbon footprint in order to get high with a close-knit community of 70,000 other people, but maybe my sight is obscured from the outside.

The question remains, though: will code be a blessing, or a curse? What happens to employment as “traditional” jobs disappear? Will blockchain and other new platforms and technologies make us freer, or simply find new ways to control us?

The advance of code reduces individual power and autonomy while it increases individual capabilities and freedom.

So far, Auerswald points out, there has been good reason to be optimistic:

In 1990, a staggeringly high 43 percent of people in the “developing world,” approximately 1.9 billion people, lived in extreme poverty. By 2010, that number had fallen to 21 percent. …

For the past two centuries, the vehicle for that progress has been the continual capacity of economies to generate more and better jobs. … “Gallup has discovered that having a good job is now the great global dream … ‘A good job’ is now more important than having a family, more compelling than democracy and freedom, religion, peace and so on… Stimulating job growth is the new currency of all leaders because if you don’t deliver on it you will experience instability, brain drain, sometimes revolution…

There is something concerning about this, though. “Job creation” is now widely agreed to be in the hands of national leaders, not individuals. Ordinary people are no longer seen as drivers of innovation. People can start businesses, of course, but whether those businesses survive or fail depends on the government; for the average person, jobs are no longer created by human ingenuity but awarded by an opaque power structure.

Thus the liberal claim that “structural racism” (rather than “individual racism”) is the real cause of continued black impoverishment and high unemployment rates. In a world where employment is granted or withheld by the powerful based on whether or not they like you, not based on your own innate ability to make your own economic contribution to the world, then it is imperative to make sure that the powerful see it as important to employ people like you.

It is, in sum, an admission of the powerlessness of the individual.

Still, Auerswald is hopeful that with the rise of the peer-to-peer economy and the end of traditional factory work, not only will work be more interesting (as boring, repetitive jobs are most easily automated), but also that people will no longer be dependent on the whims of a small set of powerful people for access to jobs.

I think he underestimates how useful it is to have steady, long-term employment and how difficult it is for individuals to compete against established corporations that have much larger economies of scale and access to far more relevant data than they do. Take, for example, YouTube vs. Netflix. Netflix can use its troves of data to determine which kinds of shows customers would like to watch more of, then hire people to make those shows. This is pretty nice work if you can get it. YouTube, of course, just lets pretty much anyone put up any video they want, and most of the videos are probably pretty dull, but a few YouTubers put up quality material and an even smaller few actually make a decent amount of money. YouTuber PewDiePie, for example, holds the record at 61+ million subscribers, which has earned him $124 million. But most people who try to become YouTube stars do not become PewDiePie; most earn very little. And why should they, when most of them are low-budget amateurs with no data on what audiences are interested in, going up against other TV options like Orange is the New Black, Breaking Bad, and yes, PewDs himself?

I have a friend who is a very talented amateur clothing designer and dressmaker. I have encouraged her to open a shop on Etsy and try to sell some of her creations, but can she really compete with Walmart, The Gap, or Nordstrom? Big Clothing has a massive lead in terms of factories mass-producing clothes for sale. (Her only hope would be to go extremely upscale–wedding dresses, movie costumes, etc.)

So what does the future hold?

In the next round of digital disruption, tasks that can be automated (the “high-volume, low-price” option resulting from ongoing code-driven bifurcations…) will yield only small dividends for most people. The exception is the relatively small number of people who will maintain the platforms on which such tasks are performed…

The promising pathway for inclusive well-being is humanized work (the “low-volume, high-price” pathway resulting from ongoing code-driven bifurcations…). This pathway includes everything about value creation that is differentiated, personal, and human.

In his Conclusion, Auerswald writes:

To be human is to think critically. To collaborate. To communicate. To be creative. What we call “the economy” is one extension of these activities. It is the domain in which we develop and advance code.

From Ray Kurzweil

But the singularity approaches:

We are not at the center of our cognitive universe. Our own creations are eclipsing us.

For each of us, redefining work requires nothing less than redefining identity. This is because production is not something human beings do just to consume. In fact, the opposite is true. We are living beings. We consume in order to produce.

Well, that’s the end of the book. I hope you have enjoyed it as much as I have. What do you think the future holds? Where do you think code is taking the economy? What are the best–and worst–opportunities for growth? And what (if anything) should we read next?

 

 

*An Aside On Netflix and the use of algorithms to produce movies/TV:

…consider the fate of two films that premiered the same night at the 2015 Sundance Film Festival. … One of these films, What Happened, Miss Simone? was a documentary about singer and civil rights icon Nina Simone. That film was funded by Netflix, whose corporate decision to back the film was based in part on insights algorithmically gleaned from the vast trove of data it has collected on users of its streaming video and movie rental services. The second film was a comedy titled The Bronze, which featured television star Melissa Rauch as a vulgar gymnast. The Bronze was produced by Duplass Brothers Productions and privately financed by “a few wealthy individuals” whose decision to back the film was presumably not based on complex impersonal algorithms but rather, as has been the Hollywood norm, on business intuition.

I’ve often wondered why so many terrible movies get made.

A documentary about a Civil Rights leader might not be everyone’s cup of tea (people like to say they watch intellectual movies more than they actually do), but plenty of people will at least abstractly like it. By contrast, a “vulgar gymnast” is not an interesting premise for a movie. Vulgarity can be funny when it is contrasted with something typically not vulgar–eg, “A vulgar mobster and a pious nun team up to save an orphanage,” or even “A vulgar nun and pious mobster…” The humor lies in the contrast between purity and vulgarity. But gymnasts aren’t known for being particularly pure or vulgar–they’re neutral–so there’s no contrast in this scenario. A vulgar gymnast doesn’t sound funny; it sounds rude and unpleasant. And this is the one-sentence summary chosen to represent the movie? Not a good sign.

As you might have guessed already, What Happened, Miss Simone? did very well, and The Bronze was a bomb. It has terrible reviews on IMDB and Rotten Tomatoes. As folks have put it, it’s just not funny.

Stephens-Davidowitz notes in Everybody Lies that the industries most ripe for “big data”-fication are the ones where the current data is not very good–industries where people work more on intuition than analysis. For example, the choice of horses in horse racing, until recently, was based on pedigree and intuition–what experienced horse people thought seemed promising in a foal. There was a lot of room in horse racing for quantification and analysis–and the guy who started using mobile x-ray machines to measure horses’ heart and lung sizes was able to make significantly better predictions than people who just looked at the horses’ outsides. By contrast, hedge funds have already put significant effort into quantifying what the prices of different stocks are going to do, and so it is very hard to do better data analysis than they already do.

The selection of movies and TV pilots to fund falls more into the “racing horses picked by intuition” category than the “extremely quantified hedge funds” category, which means there’s lots of room for improvement for anyone who can get good data on the subject.

Incidentally, “In 2015… Netflix accounted for almost 37 percent of all downstream internet traffic in North America during peak evening hours.”

Code Economy ch. 12: How do you LVT a Digital Land?

Welcome back to EvX’s Book Club. Today we are discussing ch. 12 of Auerswald’s The Code Economy: Equity: Progress and Poverty.

We have discussed before the Georgist notion that the increase in poverty that accompanies progress (or development) is due to skyrocketing rents in urban (that is, productive) areas, which lead to rentiers capturing an increasing percent of the wealth created by development.

Indeed, as has been noted elsewhere and in Auerswald’s discussion of Piketty’s Capital in the Twenty-First Century:

The much discussed increase in inequality since the 1970s that Piketty documents is primarily about one thing: the increasing value of real estate, an asset that is disproportionately held by the wealthy.

Auerswald has an interesting discussion of Total Factor Productivity (TFP) that I’d like to pause to discuss:

The calculation of TFP requires measures of aggregate output, capital, and labor. The measurement of each of these is inherently difficult.

Auerswald argues that TFP is particularly bad at measuring the value added by the internet. Quoting economics blogger Justin Fox:

Forty years ago the cost to copy [an S&P 500 firm] was about 5/6 of the total stock price of that firm. So 1/6 of that stock price represented the value of things you couldn’t easily copy, like patents, customer goodwill, employee goodwill, regulator favoritism, and hard to see features of company methods and culture. Today it costs only 1/6 of the stock price to copy all of a firm’s visible items and features that you can legally copy. So today the other 5/6 of the stock price represents the value of all those things you can’t copy.

(Or these companies are massively over-valued.)

In other words, if you owned a textile mill, the value of the company would be based on the value of the physical objects inside your mill. A mill with ten state-of-the-art looms could produce twice as much cloth as a mill with only 5 looms. A mill with 100 looms would produce 10 times as much cloth. The company’s value and its physical capital would be directly linked.

By contrast, if you suddenly became the sole owner of Twitter, your physical capital and the company’s value would hardly be related. What is Twitter’s physical capital? A bunch of computers in a building somewhere? An entrepreneur could not create a company with twice Twitter’s value by simply buying twice as many computers and putting them in twice as many buildings.

Whatever Twitter’s value may be, very little of it lies in physical equipment. Very little of it lies in buildings or land. Much of it, though, lies in digital land. Just as landlords derive their wealth from the benefits people derive from being near other, economically productive people, so Twitter’s value lies in the desire of people to be near other people in digital spaces:

Economic geography has taught us that the “best localities” will be the place where the returns to density are greatest… Land in “the best localities” increases in value because cities offer people tangible economic returns that derive from density and interconnection.

Please discuss the implications for

1. Third world mega-cities like Karachi or Lagos.

2. Immigration from third world to first world.

3. Digital real estate, like Twitter.

About the digital economy of Second Life, Auerswald writes:

Second Life had nearly seven million registered users… Second Life sustained an economy consisting of the production and exchange of virtual goods and services; it had a GDP equivalent to $500 million, benchmarked by $6 million per month of monetized trade with the real world.

I have been thinking about in-game economies for years, ever since discovering that many online games have their own currencies, which may or may not be legally tradeable for US dollars. But I had not, until this moment, thought of these games as actually modellable like real countries, with economies, exchange rates, and trade with the outside world.

Further:

The virtual and real worlds of entrepreneurship and work are converging in similar ways… “As soon as tens or hundreds of U.S. dollars were sufficient to start a business in Second Life, thousands of people began to try. Compare this to the real world, where a primary source of funding for small businesses is a second mortgage.” …

Seven years later… The Economist published an article about entrepreneurial startups in the US titled “The Cambrian Explosion,” … This article described how an array of new platforms had dramatically lowered the cost of launching and growing a real-world business: “One explanation for the Cambrian explosion of 540m years ago is that at that time the basic building blocks of life had just been perfected, allowing more complex organisms to be assembled more rapidly. Similarly, the basic building blocks for digital services and products… have become so evolved, cheap and ubiquitous that they can be easily combined and recombined.”

Auerswald then moves on to the matter of “big data,” which is a big part of how companies like Twitter and LinkedIn hope to actually make any profits. As I’ve mentioned, I’ve taken a side-tour into “Big Data” that I think was a useful complement to this book; Big Data is the best of what I’ve read so far, though nothing has stood out as whiz-bang fabulous. The relevant summary version is that companies like LinkedIn and Facebook are really about the data they gather, rather than the fun you have looking at memes your grandmother reposted. That data, in turn, will probably have a variety of economic uses–though maybe not to you:

And yet, while a large number of people contribute to the value Big Data creates, a relatively small number captures most of the gains. Why is that?

Just as the rentier class gathers most of the benefits from living in a valuable city in close proximity to the engines of human productivity, so do the owners of digital platforms, like Facebook, benefit from the creation of data wealth by their millions of digital citizens.

Digital platforms are the new land; will they also be the new Monopoly?

Auerswald then makes a very interesting observation:

Physical land is yours if, and only if, you have both the right and the practical capacity to prevent other people from accessing it. The same is true of digital land. … That capacity for exclusion–the source of all monetized value derived from digital exchange–depends on the existence of reliable protocols for authentication and verification. … “Open leads to value creation… To capture value you have to find something to close.”

This is so important, I’m tempted to repeat it a few times. Exclusion is the source of all monetized value.

The “brand” (ie, Nike, Apple, Harley-Davidson, Harvard) is modern society’s solution to authentication and verification in modern, anonymous markets. Our ancestors, who engaged primarily in face-to-face transactions with people they knew from their own villages, had no need of brands. They didn’t worry whether they were being tricked into buying knockoff-brand potatoes from farmer Joe; they just bought potatoes.

In the modern economy, it makes a difference whether you get a real Apple computer or a knockoff with an apple sticker slapped on. It matters whether you get real Acetaminophen or a mysterious pill that may or may not contain morphine. It matters whether you buy a brand new Ford or a car cobbled together from the corpses of three totaled station wagons with a new coat of paint.

This, Auerswald argues, is why the government imposes such stiff penalties on people who violate trademarks–violation of the Trademark Counterfeiting Act of 1984 can incur a fine of 5 million dollars or 20 years imprisonment.

Yet just as the advance of code has created brands, code is now in the process of undoing them. How? By converting trust directly into code–into algorithmic systems for verification and authentication.

Basically, he thinks we’re going to blockchain and Yelp our way into a peer-to-peer economy where people’s online ratings serve as an effective substitute for brands–a world in which angry twitter mobs can crash one’s entire career by giving a bunch of one-star Yelp reviews.

Remember: everything else is downstream from territory.

That’s all for today. Bitcoin and the Blockchain are chapter 13.

Book Club: Code Economy, Ch. 10: In which I am Confused

Welcome back to EvX’s Book Club. Today we start the third (and final) part of Auerswald’s The Code Economy: The Human Advantage.

Chapter 10: Complementarity discusses bifurcation, a concept Auerswald mentions frequently throughout the book. He has a graph of the process of bifurcation, whereby the development of new code (ie, technology) leads to the creation of a new “platform” on the one hand, and new human work on the other. With each bifurcation, we move away from the corner of the graph marked “simplicity” and “autonomy,” and toward the corner marked “complexity” and “interdependence.” It looks remarkably like a graph I made about energy inputs vs outputs at different complexity levels, based on a memory of a graph I saw in a textbook some years ago.

There are some crucial differences between our two graphs, but I think they are nonetheless related–and possibly trying to express the same thing.

Auerswald argues that as code becomes platform, it doesn’t steal jobs, but becomes the new base upon which people work. The Industrial Revolution eliminated the majority of farm laborers via automation, but simultaneously provided new jobs for them, in factories. Today, the internet is the “platform” where jobs are being created, not in building the internet, but via businesses like Uber that couldn’t exist without the internet.

Auerswald’s graph (not mine) is one of the few places in the book where he comes close to examining the problem of intelligence. It is difficult to see what unintelligent people are going to do in a world that is rapidly becoming more complicated.

On the other hand people who didn’t have access to all sorts of resources now do, due to internet-based platforms–people in the third world, for example, who never bought land-line telephones because their country couldn’t afford to build the infrastructure to support them, are snapping up mobile and smartphones at an extraordinary rate:

And overwhelming majorities in almost every nation surveyed report owning some form of mobile device, even if they are not considered “smartphones.”

And just like Auerswald’s learning curves from the last chapter, technological spread is speeding up. It took the landline telephone 64 years to go from 0% to 40% of the US market. Mobile phones took only 20 years to accomplish the same feat, and smartphones did it in about 10. (source.)
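
For the mathematically inclined: technology diffusion like this is often modeled with a logistic S-curve, and under that model the time to climb from one adoption level to another depends only on the growth-rate parameter. Here’s a minimal sketch–the rates are invented so that the climb from 1% (a logistic curve never sits at exactly 0%) to 40% takes roughly the years cited above; nothing here comes from the source:

```python
import math

# Logistic diffusion: fraction adopted at time t is 1/(1 + exp(-r*(t - t0))).
# Inverting the curve gives the time needed to climb between two adoption
# levels, which depends only on the growth rate r, not on t0.
def years_between(p_from, p_to, r):
    logit = lambda p: math.log(p / (1 - p))
    return (logit(p_to) - logit(p_from)) / r

# Growth rates below are invented to reproduce the ~64, ~20, and ~10 year
# figures in the text.
for name, r in [("landline", 0.065), ("mobile phone", 0.21), ("smartphone", 0.42)]:
    print(f"{name}: ~{years_between(0.01, 0.40, r):.0f} years from 1% to 40%")
```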

There are now more mobile phones in the developing world than in the first world, and people aren’t just buying these phones to chat. People who can’t afford to open bank accounts now use their smartphones as “mobile wallets”:

According to the GSMA, an industry group for the mobile communications business, there are now 79 mobile money systems globally, mostly in Africa and Asia. Two-thirds of them have been launched since 2009.

To date, the most successful example is M-Pesa, which Vodafone launched in Kenya in 2007. A little over three years later, the service has 13.5 million users, who are expected to send 20 percent of the country’s GDP through the system this year. “We proved at Vodafone that if you get the proposition right, the scale-up is massive,” says Nick Hughes, M-Pesa’s inventor.

But let’s get back to Auerswald. Chapter 10 contains a very interesting description of the development of the Swiss watch industry. Of course, today, most people don’t go out of their way to buy watches, since their smartphones have clocks built into them. Have smartphones put the Swiss out of business? Not quite, says Auerswald:

Switzerland… today produces fewer than 5 percent of the timepieces manufactured for export globally. In 2014, Switzerland exported 29 million watches, as compared to China’s 669 million… But what of value? … Swiss watch exports were worth $24.3 billion in 2014, nearly five times as much as all Chinese watches combined.

Aside from the previously mentioned bifurcation of human and machine labor, Auerswald suggests that automation bifurcates products into cheap and expensive ones. He claims that movies, visual art services (ie, copying and digitization of art vs. fine art,) and music have also undergone bifurcation, not extinction, due to new technology.

In each instance, disruptive advances in code followed a consistent and predictable pattern: the creation of a new high-volume, low-price option creates a new market for the low-volume, high-price option. Every time this happens, the new value created through improved code forces a bifurcation of markets, and of work.

Detroit

He then discusses a watch-making startup located in Detroit, which I feel completely and totally misses the point of whatever economic lessons we can draw from Detroit.

Detroit is, at least currently, a lesson in how people fail to deal with increasing complexity, much less bifurcation.

Even that word–bifurcation–contains a problem: what happens to the middle? A huge mass of people at the bottom, making and consuming cheap products, and a small class at the top, making and consuming expensive products–well, I will honor the demonstrated preferences of everyone involved for stuff, of whatever price, but what about the middle?

Is this how the middle class dies?

But if the poor become rich enough… does it matter?

Because work is fundamentally algorithmic, it is capable of almost limitless diversification though both combinatorial and incremental change. The algorithms of work become, fairly literally, the DNA of the economy. …

As Geoff Moore puts it, “Digital innovation is reengineering our manufacturing-based product-centric economy to improve quality, reduce cost, expand markets, … It is doing so, however, largely at the expense of traditional middle class jobs. This class of work is bifurcating into elite professions that are highly compensated but outside the skillset of the target population and commoditizing workloads for which the wages fall well below the target level.”

It is easy to take the long view and say, “Hey, the agricultural revolution didn’t result in massive unemployment among hunter-gatherers; the bronze and iron ages didn’t result in unemployed flint-knappers starving in the streets, so we’ll probably survive the singularity, too,” and equally easy to take the short view and say, “screw the singularity, I need a job that pays the bills now.”

Auerswald then discusses the possibilities for using big data and mobile/wearable computers to bring down healthcare costs. I am also in the middle of a Big Data reading binge, and my general impression of health care is that there is a ton of data out there (and more being collected every day,) but it is unwieldy and disorganized, doctors are too busy to use most of it, and patients don’t have access to it. If someone can amass, organize, and sort that data in useful ways, some very useful discoveries could be made.

Then we get to the graph that I didn’t understand, “Trends in Nonroutine Task Input, 1960 to 1998,” which is a bad sign for my future employment options in this new economy.

My main question is what is meant by “nonroutine manual” tasks, and since these were the occupations with the biggest effect shown on the graph, why aren’t they mentioned in the abstract?:

We contend that computer capital (1) substitutes for a limited and well-defined set of human activities, those involving routine (repetitive) cognitive and manual tasks; and (2) complements activities involving non-routine problem solving and interactive tasks. …Computerization is associated with declining relative industry demand for routine manual and cognitive tasks and increased relative demand for non-routine cognitive tasks.

Yes, but what about the non-routine manual? What is that, and why did it disappear first? And does this graph account for increased offshoring of manufacturing jobs to China?

If you ask me, it looks like there are three different events recorded in the graph, not just one. First, from 1960 onward, “non-routine manual” jobs plummet. Second, from 1960 through 1970, “routine cognitive” and “routine manual” jobs increase faster than “non-routine analytic” and “non-routine interactive.” Third, from 1980 onward, the routine jobs head downward while the analytic and interactive jobs become more common.

*Downloads the PDF and begins to read* Here’s the explanation of non-routine manual:

Both optical recognition of objects in a visual field and bipedal locomotion across an uneven surface appear to require enormously sophisticated algorithms, the one in optics and the other in mechanics, which are currently poorly understood by cognitive science (Pinker, 1997). These same problems explain the earlier mentioned inability of computers to perform the tasks of long haul truckers.

In this paper we refer to such tasks requiring visual and manual skills as ‘non-routine manual activities.’

This does not resolve the question.

Discussion from the paper:

Trends in routine task input, both cognitive and manual, also follow a striking pattern. During the 1960s, both forms of input increased due to a combination of between- and within-industry shifts. In the 1970s, however, within-industry input of both tasks declined, with the rate of decline accelerating.

As distinct from the other four task measures, we observe steady within- and between-industry shifts against non-routine manual tasks for the entire four decades of our sample. Since our conceptual framework indicates that non-routine manual tasks are largely orthogonal to computerization, we view this pattern as neither supportive nor at odds with our model.

Now, it’s 4 am and the world is swimming a bit, but I think “we aren’t predicting any particular effect on non-routine manual tasks” should have been stated up front in the thesis portion. Sticking it in here feels like ad-hoc explaining away of a discrepancy. “Well, all of the other non-routine tasks went up, but this one didn’t, so, well, it doesn’t count because they’re hard to computerize.”

Anyway, the paper is 62 pages long, including the tables and charts, and I’m not reading it all or second-guessing their math at this hour, but I feel like there is something circular in all of this–“We already know that jobs involving routine labor like manufacturing are down, so we made a model saying they decreased as a percent of jobs because of computers and automation, looked through jobs data, and lo and behold, found that they had decreased. Confusingly, though, we also found that non-routine manual jobs decreased during this time period, even though they don’t lend themselves to automation and computerization.”

I also searched in the document and could find no instance of the words “offshor-,” “China,” “export,” or “outsource.”

Also, the graph Auerswald uses and the corresponding graph in the paper have some significant differences, especially the “routine cognitive” line. Maybe the authors updated their graph with more data, or Auerswald was trying to make the graph clearer. I don’t know.

Whatever is up with this paper, I think we may provisionally accept its data–fewer factory workers, more lawyers–without necessarily accepting its model.

The day after I wrote this, I happened to be reading Davidowitz’s Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are, which has a discussion of the best places to raise children.

Talking about Chetty’s data, Davidowitz writes:

The question asked: what is the chance that a person with parents in the bottom 20 percent of the income distribution reaches the top 20 percent of the income distribution? …

So what is it about parts of the United States where there is high income mobility? What makes some places better at leveling the playing field, of allowing a poor kid to have a pretty good life? Areas that spend more on education provide a better chance to poor kids. Places with more religious people and lower crime do better. Places with more black people do worse. Interestingly, this has an effect on not just the black kids but on the white kids living there as well.

Here is Chetty’s map of upward mobility (or the lack thereof) by county. Given how closely it matches a map of “African Americans” + “Native Americans” I have my reservations about the value of Chetty’s research on the bottom end (is anyone really shocked to discover that black kids enjoy little upward mobility?) but it still has some comparative value.

Davidowitz then discusses Chetty’s analysis of where people live the longest:

Interestingly, for the wealthiest Americans, life expectancy is hardly affected by where you live. …

For the poorest Americans, life expectancy varies tremendously…. living in the right place can add five years to a poor person’s life expectancy. …

religion, environment, and health insurance–do not correlate with longer life spans for the poor. The variable that does matter, according to Chetty and the others who worked on this study? How many rich people live in a city. More rich people in a city means the poor there live longer. Poor people in New York City, for example, live longer than poor people in Detroit.

Davidowitz suggests that maybe this happens because the poor learn better habits from the rich. I suspect the answer is simpler–here are a few possibilities:

1. The rich are effectively stopping the poor from doing self-destructive things, whether positively, eg, funding cultural amenities that poor people go to rather than turning to drugs or crime out of boredom, or negatively, eg, funding police forces that discourage life-shortening crime.

2. The rich fund/support projects that improve general health, like cleaner water systems or better hospitals.

3. The effect is basically just a measurement error that doesn’t account for rich people driving up land prices. The “poor” of New York would be wealthier if they had Detroit rents. (See the sketch after this list.)
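
To make option 3 concrete, here’s the toy calculation–every number below is invented for illustration, not taken from Chetty’s or Davidowitz’s data:

```python
# Hypothetical illustration of point 3: nominal income minus rent.
# All figures are invented for the example, not real statistics.
incomes = {"New York": 32000, "Detroit": 25000}   # nominal annual income
rents   = {"New York": 21600, "Detroit": 9600}    # annual rent

for city in incomes:
    disposable = incomes[city] - rents[city]
    print(f"{city}: ${incomes[city]:,} income - ${rents[city]:,} rent = ${disposable:,} left over")

# With these made-up numbers, the nominally poorer Detroit household
# ends up with more disposable income than the New York one.
```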

(In general, I think Davidowitz is stronger when looking for correlations in the data than when suggesting explanations for it.)

Now contrast this with Davidowitz’s own study on where top achievers grow up:

I was curious where the most successful Americans come from, so one day I decided to download Wikipedia. …

[After some narrowing for practical reasons] Roughly 2,058 American-born baby boomers were deemed notable enough to warrant a Wikipedia entry. About 30 percent made it through achievements in art or entertainment, 29 percent through sports, 9 percent via politics, and 3 percent in academia or science.

And this is why we are doomed.

The first striking fact I noticed in the data was the enormous geographic variation in the likelihood of becoming a big success …

Roughly one in 1,209 baby boomers born in California reached Wikipedia. Only one in 4,496 baby boomers born in West Virginia did. … Roughly one in 748 baby boomers born in Suffolk County, MA, where Boston is located, made it to Wikipedia. In some counties, the success rate was twenty times lower. …

I closely examined the top counties. It turns out that nearly all of them fit into one of two categories.

First, and this surprised me, many of these counties contained a sizable college town. …

I don’t know why that would surprise anyone. But this was interesting:

Of fewer than 13,000 boomers born in Macon County, Alabama, fifteen made it to Wikipedia–or one in 852. Every single one of them is black. Fourteen of them were from the town of Tuskegee, home of Tuskegee University, a historically black college founded by Booker T. Washington. The list included judges, writers, and scientists. In fact, a black child born in Tuskegee had the same probability of becoming a notable in a field outside of sports as a white child born in some of the highest-scoring, majority-white college towns.

The other factor that correlates with the production of notables?

A big city.

Being born in San Francisco County, Los Angeles County, or New York City all offered among the highest probabilities of making it to Wikipedia. …

Suburban counties, unless they contained major college towns, performed far worse than their urban counterparts.

A third factor that correlates with success is the proportion of immigrants in a county, though I am skeptical of this finding because I’ve never gotten the impression that the southern border of Texas produces a lot of famous people.

Migrant farm laborers aside, though, America’s immigrant population tends to be pretty well selected overall and thus produces lots of high-achievers. (Steve Jobs, for example, was the son of a Syrian immigrant; Thomas Edison was the son of a Canadian refugee.)

The variable that didn’t predict notability:

One I found more than a little surprising was how much money a state spends on education. In states with similar percentages of its residents living in urban areas, education spending did not correlate with rates of producing notable writers, artists, or business leaders.

Of course, this is probably because 1. districts increase spending when students do poorly in school, and 2. rich people in urban areas send their kids to private schools.

BUT:

It is interesting to compare my Wikipedia study to one of Chetty’s team’s studies discussed earlier. Recall that Chetty’s team was trying to figure out what areas are good at allowing people to reach the upper middle class. My study was trying to figure out what areas are good at allowing people to reach fame. The results are strikingly different.

Spending a lot on education helps kids reach the upper middle class. It does little to help them become a notable writer, artist, or business leader. Many of these huge successes hated school. Some dropped out.

Some, like Mark Zuckerberg, went to private school.

New York City, Chetty’s team found, is not a particularly good place to raise a child if you want to ensure he reaches the upper middle class. It is a great place, my study found, if you want to give him a chance at fame.

A couple of methodological notes:

Note that Chetty’s data not only looked at where people were born, but also at mobility–poor people who moved from the Deep South to the Midwest were more likely to become upper middle class, and poor people who moved from the Midwest to NYC were more likely to stay poor.

Davidowitz’s data only looks at where people were born; he does not answer whether moving to NYC makes you more likely to become famous. He also doesn’t discuss who is becoming notable–are cities engines that make the children of already successful people even more successful, or are they places where even the poor have a shot at being famous?

I reject Davidowitz’s conclusions (which impute causation where there is only correlation) and substitute my own:

Cities are acceleration platforms for code. Code creates bifurcation. Bifurcation creates winners and losers while obliterating the middle.

This is not necessarily a problem if your alternatives are worse–if your choice is between poverty in NYC or poverty in Detroit, you may be better off in NYC. If your choice is between poverty in Mexico and poverty in California, you may choose California.

But if your choice is between a good chance of being middle class in Salt Lake City versus a high chance of being poor and an extremely small chance of being rich in NYC, you are probably a lot better off packing your bags and heading to Utah.

But if cities are important drivers of innovation (especially in science, to which we owe thanks for things like electricity and refrigerated food shipping,) then Auerswald has already provided us with a potential solution to their runaway effects on the poor: Henry George’s land value tax. As George accounts, one day, while overlooking San Francisco:

I asked a passing teamster, for want of something better to say, what land was worth there. He pointed to some cows grazing so far off that they looked like mice, and said, “I don’t know exactly, but there is a man over there who will sell some land for a thousand dollars an acre.” Like a flash it came over me that there was the reason of advancing poverty with advancing wealth. With the growth of population, land grows in value, and the men who work it must pay more for the privilege.[28]
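
For the unfamiliar, here’s a minimal sketch of how George’s land value tax differs from an ordinary property tax. The parcels, values, and rates are all hypothetical:

```python
# Sketch of George's idea: tax the unimproved land value only,
# not the buildings on it. All numbers are hypothetical.

def property_tax(land_value, building_value, rate=0.01):
    # A conventional property tax hits land plus improvements.
    return rate * (land_value + building_value)

def land_value_tax(land_value, building_value, rate=0.05):
    # A Georgist LVT ignores improvements entirely.
    return rate * land_value

# An empty lot and a lot with an apartment building, on identical land.
parcels = {
    "empty lot":  {"land_value": 1_000_000, "building_value": 0},
    "apartments": {"land_value": 1_000_000, "building_value": 4_000_000},
}

for name, p in parcels.items():
    print(name, "| property tax:", property_tax(**p), "| LVT:", land_value_tax(**p))

# Under the LVT both parcels owe the same, so holding valuable land
# idle is costly, and building on it is not penalized.
```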

Alternatively, higher taxes on fortunes like Zuckerberg’s and Bezos’s might accomplish the same thing.

Book Club: Code Economy: Economics as Information Theory

If the suggestion… that the economy is “alive” seems fanciful or far-fetched, it is much less so if we consider the alternative: that it is dead.

Welcome back to your regularly scheduled discussion of Auerswald’s The Code Economy: a Forty-Thousand-Year History. Today we are discussing Chapter 9: Platforms, but feel free to jump into the discussion even if you haven’t read the book.

I loved this chapter.

We can safely answer that the economy–or the sum total of human provisioning, consumptive, productive and social activities–is neither “alive” nor, exactly, “non-living.”

The economy has a structure, yes. (So does a crystal.) It requires energy, like a plant, sponge, or macaque. It creates waste. But like a beehive, it is not “conscious;” voters struggle to make any kind of coherent policies.

Can economies reproduce themselves, like a beehive sending out swarms to found new hives? Yes, though it is difficult in a world where most of the sensible human niches have already been filled.

Auerswald notes that his use of the word “code” throughout the book is not (just) because of its modern sense in the coding of computer programs, but because of its use in the structure of DNA–we are literally built from the instructions in our genetic “code,” and society is, on top of that, layers and layers of more code, for everything from “how to raise vegetables” to “how to build an iPhone” to, yes, Angry Birds.

Indeed, as I have insisted throughout, this is more than an analogy: the introduction of production recipes into economics is… exactly what the introduction of DNA is to molecular biology. It is the essential first step toward a transformation of economics into a branch of information theory.

I don’t have much to say about information theory because I haven’t studied information theory, beyond once reading a problem about a couple of people named Alice and Bob who were trying to send messages to each other, but I did read Viktor Mayer-Schönberger and Kenneth Cukier‘s Big Data: A Revolution That Will Transform How We Live, Work, and Think a couple of weeks ago. It doesn’t rise to the level of “OMG this was great you must read it,” but if you’re interested in the subject, it’s a good introduction and pairs nicely with The Code Economy, as many of the developments in “big data” are relevant to recent developments in code. It’s also helpful in understanding why on earth anyone sees anything of value in companies like Facebook and LinkedIn, which will be coming up soon.

You know, we know that bees live in a hive, but do bees know? (No, not for any meaningful definition of “knowing.”) But imagine being a bee, and slowly working out that you live in a hive, and that the hive “behaves” in certain ways that you can model, just like you can model the behavior of an individual bee…

Anyway:

Economics has a lot to say about how to optimize the level of inputs to get output, but what about the actual process of turning inputs into outputs? … In the Wonderful World of Widgets that is standard economics, ingredients combine to make a final product, but the recipe by which the ingredients actually become the product is nowhere explicitly represented.

After some papers on the NK model and the shift in organizational demands from pre-industrial economic production to post-industrial large-scale production by mega firms (or, in the case of communism, by whole states,) Auerswald concludes that

…the economy manages increasing complexity by “hard-wiring” solutions into standards, which in turn define platforms.

Original Morse Telegraph machine, circa 1835 https://en.wikipedia.org/wiki/Samuel_Morse

This is an important insight. Electricity was once a new technology, whose mathematical rules were explored by cutting-edge scientists. Electrical appliances and the grid to deliver the electricity they run on were developed by folks like Edison and Tesla.

But today, the electrical grid reaches nearly every house in America. You don’t have to understand electricity at all to plug in your toaster. You don’t have to be Thomas Edison to lay electrical lines. You just have to follow instructions.

Electricity + electrical appliances replaced many jobs people used to do, like candle making or pony express delivery man, but electricity has not resulted in an overall loss of jobs. Rather, far more jobs now exist that depend on the electrical grid (or “platform”) than were eliminated.

(However, one of the problems with codifying things into platforms is that systems then have difficulty handling other, perfectly valid methods of doing things. Early codification may lock in certain ways of doing things that are actually suboptimal, like how our computer keyboard layout is intentionally difficult to use not because of anything to do with computers, but because typewriters in the 1800s jammed if people typed on them too quickly. Today, we would be better off with a more sensible keyboard layout, but the old one persists because too many systems use it.)

The Industrial Revolution was a time of first technological development, and then encoding into platforms of many varieties–transportation networks of water and rail; electrical, sewer, and fresh water grids; the large-scale production of antibiotics and vaccines; and even the codification of governments.

The English, a nation of a couple thousand years or so, are governed under a system known as “Common Law,” which is just all of the legal precedents and traditions built up over that time that have come into customary use.

When America was founded, it didn’t have a thousand years of experience to draw on because, well, it had just been founded, but it did have a thousand years of cultural memory of England’s government. English Common Law was codified as the base of the American legal system.

The Articles of Confederation, famous only for not working very well, were the fledgling country’s first attempt at codifying how the government should operate. They are typically described as failing because they allocated insufficient power to the federal government, but I propose a more nuanced take: the Articles laid out insufficient code for dealing with nation-level problems. The Constitution solved these problems and instituted the basic “platform” on which the rest of the government is built. Today, whether we want to ratify a treaty or change the speed limit on I-405, we don’t have to re-derive the entire decision-making structure from scratch; legitimacy (for better or for worse) is already built into the system.

Since the days of the American and French revolutions, new countries have typically had “constitutions,” not because Common Law is bad, but because there is no need to re-derive successful governing platforms from scratch–they can just be copied from other countries, just as one firm can copy another firm’s organizational structure.

Continuing with Auerswald and the march of time:

Ask yourself what the greatest inventions were over the past 150 years: Penicillin? The transistor? Electrical power? Each of these has been transformative, but equally compelling candidates include universal time, container shipping, the TCP/IP protocols underlying the Internet, and the GSM and CDMA standards that underlie mobile telephony. These are the technologies that make global trade possible by making code developed in one place interoperable with code developed in another. Standards reduce barriers among people…

Auerswald, as a code enthusiast, doesn’t devote much space to the downsides of code. Clearly, code can make life easier, by reducing the number of cognitive tasks required to get a job done. Let’s take the matter of household management. If a husband and wife both adhere to “traditional gender norms,” such as an expectation that the wife will take care of internal household chores like cooking and vacuuming, and the husband will take care of external chores, like mowing the lawn, taking out the trash, and pumping gas, neither spouse has to ever discuss “who is going to do this job” or wonder “hey, did that job get done?”

Following an established code thus aids efficiency and appears to decrease marital stress (there are studies on this,) but this does not mean that the code itself is optimal. Perhaps men make better dish-washers than women. Or for a couple with a disabled partner, perhaps all of the jobs would be better performed by reversing the roles.

Technological change also encourages code change:

The replacement of manual push-mowers with gas-powered mowers makes mowing the lawn easier for women, so perhaps this task would be better performed by housewives. (Even the Amish have adopted milking machines on the grounds that, by pumping the milk away from the cow for you, the machines enable women to participate equally in the milking–a task that previously involved picking up and carrying around 90 lb milk jugs.)

But re-writing the entire code is work and involves a learning curve as both parties sort out and get used to new expectations. (See my previous thread on “emotional labor” and its relation to gender norms.) So even if you think the old code isn’t fair or optimal, it still might be easier than trying to make a new code–and this extends to far more human relations than just marriage.

And then you get cases where the new technology is incompatible with the old code. Take, for example, the relationship between transportation, weights and measures, and the French Revolution.

A country in which there is no efficient way to get from Point A to Point B has no need for a standardized set of weights and measures, as people in Community A will never encounter or contend with whatever system they are using over in Community B. Even if a king wanted to create a standard system, he would have difficulty enforcing it. Instead, each community tends to evolve a system that works well for its own needs. A community that grows bananas, for example, will come up with measures suitable to bananas, like the “bunch,” a community that deals in grain will invent the “bushel,” and a community that enumerates no goods, like the Piraha, will not bother with large quantities at all.

(Diamonds are measured in “carats,” which have nothing to do with the orange vegetable, but instead are derived from the seeds of the carob tree, which apparently are small enough to be weighed against small stones.)

Since the French paid taxes, there was some demand for standardized weights and measures within each province–if your taxes are “one bushel of grain,” you want to make sure “bushel” is well defined so the local lord doesn’t suddenly define this year’s bushel as twice as big as last year’s bushel–and likewise, the lord doesn’t want this year’s bushel to be defined as half the size as last year’s.

But as roads improved and trade increased, people became concerned with making sure that a bushel of grain sold in Paris was the same as a bushel purchased in Nice, or that 5 carats of diamonds in Bordeaux was still 5 carats when you reached Cognac.

But the established power of the local nobility made it very hard to change whatever measures people were using in each individual place. That is, the existing code made it hard to change to a more efficient code, probably because local lords were concerned the new measures would result in fewer taxes, and the local peasants were concerned they would result in higher taxes.

Thus it was only with the decapitation of the Ancien Regime and wiping away of the privileges and prerogatives of the nobility that Revolutionary France established, as one of its few lasting reforms, a universal system of weights and measures that has come down to us today as the metric or SI system.

Now, speaking as an American who has been trained in both Metric and Imperial units, using multiple systems can be annoying, but is rarely deadly. On the scale of sub-optimal ideas, humans have invented far worse.

Quoting Richard Rhodes, The Making of the Atomic Bomb:

“The end result of the complex organization that was the efficient software of the Great War was the manufacture of corpses.

This essentially industrial operation was fantasized by the generals as a “strategy of attrition.” The British tried to kill Germans, the Germans tried to kill British and French and so on, a “strategy” so familiar by now that it almost sounds normal. It was not normal in Europe before 1914 and no one in authority expected it to evolve, despite the pioneering lessons of the American Civil War. Once the trenches were in place, the long grave already dug (John Masefield’s bitterly ironic phrase), then the war stalemated and death-making overwhelmed any rational response.

“The war machine,” concludes Elliot, “rooted in law, organization, production, movement, science, technical ingenuity, with its product of six thousand deaths a day over a period of 1,500 days, was the permanent and realistic factor, impervious to fantasy, only slightly altered by human variation.”

No human institution, Elliot stresses, was sufficiently strong to resist the death machine. A new mechanism, the tank, ended the stalemate.”

Russian Troops waiting for death

On the Eastern Front, the Death Machine was defeated by the Russian Revolution, as the cannon fodder decided it didn’t want to be turned into corpses anymore.

I find World War I more interesting than WWII because it makes far less sense. The combatants in WWII had something resembling sensible goals, some chance of achieving their goals, and attempted to protect the lives of their own people. WWI, by contrast, has no such underlying logic, yet it happened anyway–proof that seemingly logical people can engage in the ultimate illogic, even as it reduces whole countries to nothing but death machines.

Why did some countries revolt against the cruel code of war, and others not? Perhaps an important factor is the perceived legitimacy of the government itself (though regular food shipments are probably just as critical.) Getting back to information theory, democracy itself is a kind of blockchain for establishing political legitimacy, (more on this in a couple of chapters) which may account for why some countries perceived their leadership as more legitimate, and other countries suddenly discovered, as information about other people’s opinions became easier to obtain, that the government enjoyed very little legitimacy.

But I am speculating, and have gotten totally off-topic (Auerswald was just discussing the establishment of TCP/IP protocols and other similar standards that aid international trade, not WWI!)

Returning to Auerswald, he cites a brilliant quote from Alfred North Whitehead:

“Civilization advances by extending the number of operations we can perform without thinking about them.”

As we were saying, while sub-optimal (or suicidal) code can and does destroy human societies, good code can substantially increase human well-being.

The discovery and refinement of new inventions, technologies, production recipes, etc., involves a steep learning curve as people first figure out how to make the thing work and how to source and assemble all of the parts necessary to build it (eg, the invention of the automobile in the late 1800s and early 1900s). But once the technology spreads, it simply becomes part of the expected infrastructure of everyday life (eg, the interstate highways and gas stations that let people drive cars all around the US)–a “platform” on which other, future innovations build. Post-1950, most automobile-driven innovation was located not in refinements to the engines or brakes, but in things you can do with vehicles, like long-distance shipping.

Interesting things happen to income as code becomes platforms, but I haven’t worked out all of the details.

Continuing with Auerswald:

Note that, in code economics, a given country’s level of “development” is not sensibly measured by the total monetary value of all goods and services it produces…. Rather, the development of a country consists of … its capacity to execute more complex code. …

Less-developed countries that lack the code to produce complex products will import them, and they will export simpler intermediate products and raw materials in order to pay for the required imports.

By creating and adhering to widely-observed “standards,” increasing numbers of countries (and people) are able to share inventions, code, and development.

Of the drivers of beneficial trade, international standards are at once among the most important and the least appreciated. … From the invention of bills of exchange in the Middle Ages … to the creation of twenty-first-century communications protocols, innovations in standards have lowered the cost and enhanced the value of exchange across distance. …

For entrepreneurs in developing countries, demonstrated conformity with international standards… is a universally recognized mark of organizational capacity that substantially eases entry into global production and distribution networks.

In other words, you are more likely to order steel from a foreign factory if you have some confidence that you will actually receive the variety you ordered, and the factory can signal that it knows what it is doing and will actually deliver the specified steel by adhering to international standards.

On the other hand, I think this can degenerate into a reliance on the appearance of doing things properly, which partially explains the Elizabeth Holmes affair. Holmes sounded like she knew what she was doing–she knew how to sound like she was running a successful startup because she’d been raised in the Silicon Valley startup culture. Meanwhile, the people investing in Holmes’s business didn’t know anything about blood testing (Holmes’s supposed invention tested blood)–they could only judge whether the company sounded like it was a real business.

Auerswald then has a fascinating section comparing each subsequent “platform” that builds on the previous “platform” to trophic levels in the environment. The development of each level allows for the development of another, more complex level above it–the top platform becomes the space where newest code is developed.

If goods and services are built on platforms, one atop the other, then it follows that learning at higher levels of the system should be faster than learning at lower levels, for the simple reason that learning at higher levels benefits from incremental learning all the way down.

There are two “layers” of learning. Raw material extraction shows high volatility around a gradually increasing trend, aka slow learning. By contrast, the delivery of services over existing infrastructure, like roads or wireless networks, shows exponential growth, aka fast learning.

In other words, the more levels of code you already have established and standardized into platforms, the faster learning goes–the basic idea behind the singularity.
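
Here’s a toy simulation of those two learning regimes–volatile drift at the raw-material layer versus compounding growth on top of established platforms. The growth rates and noise parameters are invented for illustration, not taken from Auerswald’s data:

```python
import random

random.seed(0)

def raw_material_progress(years=40, trend=0.01, noise=0.15):
    """Slow learning: a small upward drift swamped by volatility."""
    level, series = 1.0, []
    for _ in range(years):
        level *= 1 + trend + random.gauss(0, noise)
        series.append(level)
    return series

def platform_service_progress(years=40, rate=0.25):
    """Fast learning: compounding growth on top of existing platforms."""
    return [(1 + rate) ** t for t in range(1, years + 1)]

slow = raw_material_progress()
fast = platform_service_progress()
print(f"After 40 years: raw materials ~{slow[-1]:.1f}x, platform services ~{fast[-1]:.0f}x")
```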


That’s all for today. See you next week!

Book Club: The Code Economy: The DNA of Business

“DNA builds products with a purpose. So do people.” –Auerswald, The Code Economy

McDonald’s is the world’s largest restaurant chain by revenue[7], serving over 69 million customers daily in over 100 countries[8] across approximately 36,900 outlets as of 2016.[9] … According to a BBC report published in 2012, McDonald’s is the world’s second-largest private employer (behind Walmart) with 1.9 million employees, 1.5 million of whom work for franchises. …

There are currently a total of 5,669 company-owned locations and 31,230 franchised locations… Notably, McDonald’s has increased shareholder dividends for 25 consecutive years,[18] making it one of the S&P 500 Dividend Aristocrats.[19][20]

According to Fast Food Nation by Eric Schlosser (2001), nearly one in eight workers in the U.S. have at some time been employed by McDonald’s. … Fast Food Nation also states that McDonald’s is the largest private operator of playgrounds in the U.S., as well as the single largest purchaser of beef, pork, potatoes, and apples.  (Wikipedia)

How did a restaurant whose only decent products are french fries and milkshakes come to dominate the global corporate landscape?

IKEA is not only the world’s largest furniture store, but also among the globe’s top 10 retailers of anything and the 25th most beloved corporation. (Disney ranks number one.) Even I feel a strange, heartwarming emotion at the thought of IKEA, which somehow comes across as a sweet and kind multi-national behemoth.

In The Code Economy, Auerswald suggests that the secret to McDonald’s success isn’t (just) the french fries and milkshake machines:

Kroc opened his first McDonald’s restaurant in 1955 in Des Plaines, Illinois. Within five years he had opened two hundred new franchises across the country. [!!!] He pushed his operators obsessively to adhere to a system that reinforced the company motto: “Quality, service, cleanliness, and value.”

h/t @simongerman600

Quoting Kroc’s 1987 autobiography:

“It’s all interrelated–our development of the restaurant, the training, the marketing advice, the product development, the research that has gone into each element of the equipment package. Together with our national advertising and continuing supervisory assistance, it forms an invaluable support system. Individual operators pay 11.5 percent of their gross to the corporation for all of this…”

The process of operating a McDonald’s franchise was engineered to be as cognitively undemanding as possible. …

Kroc created a program that could be broken into subroutines…. Acting like the DNA of the organization, the manual allowed the Speedee Service System to function in a variety of environments without losing essential structure or function.

McDonald’s is big because it figured out how to reproduce.

source: Statista

I’m not sure why IKEA is so big (I don’t think it’s a franchise like McDonald’s,) but based on the information posted on their walls, it’s because of their approach to furniture design. First, think of a problem, eg, People Need Tables. Second, determine a price–IKEA makes some very cheap items and some pricier items, to suit different customers’ needs. Third, use Standard IKEA Wooden Pieces to design a nice-looking table. Fourth, draw the assembly instructions, so that anyone, anywhere, can assemble the furniture themselves–no translation needed.

IKEA furniture is kind of like Legos, in that much of it is made of very similar pieces of wood assembled in different ways. The wooden boards in my table aren’t that different in size and shape from the ones in my dresser nor the ones in my bookshelf, though the items themselves have pretty different dimensions. So on the production side, IKEA lowers costs by producing not actual furniture, but collections of boards. Boards are easy to make–sawmills produce tons of them.

Furniture is heavy, but mostly empty space. By contrast, piles of boards stack very neatly and compactly, saving space both in shipping and when buyers are loading the boxes into their cars. (I am certain that IKEA accounts for common car dimensions in designing and packing their furniture.)
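
A quick back-of-envelope calculation makes the point (the dimensions are invented, but the logic is general):

```python
# Back-of-envelope comparison of shipping an assembled table vs. a
# flat-pack of its boards. Dimensions in meters, all invented.
assembled = (1.2, 0.8, 0.75)   # length x width x height, legs attached
flat_pack = (1.2, 0.8, 0.08)   # same footprint, boards stacked flat

volume = lambda d: d[0] * d[1] * d[2]
print(f"assembled: {volume(assembled):.3f} m^3, flat-pack: {volume(flat_pack):.3f} m^3")
print(f"a truck of flat-packs carries ~{volume(assembled) / volume(flat_pack):.0f}x as many tables")
```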

And the assembly instructions allow the buyer to ultimately construct the furniture.

In other words, IKEA has hit upon a successful code that allows them to produce many different designs from a few basic boards and ship them efficiently–keeping costs low and allowing them to thrive.

From Anatomy of an IKEA product:

The company is also looking for ways to maximize warehouse efficiency.

“We have (only) two pallet sizes,” Marston said, referring to the wooden platforms on which goods are placed. “Our warehouses are dimensioned and designed to hold these two pallet sizes. It’s all about efficiencies because that helps keep the price of innovation down.”

In Europe, some IKEA warehouses utilize robots to “pick the goods,” a term of art for grabbing products off very high shelves.

These factories, Marston said, are dark, since no lighting is needed for the robots, and run 24 hours a day, picking and moving goods around.

“You (can) stand on a catwalk,” she said, “and you look out at this huge warehouse with 12 pallets (stacked on top of each other) and this robot’s running back and forth running on electronic eyebeams.”

IKEA’s code and McDonald’s code are very different, but both let the companies produce the core items they sell quickly, cheaply, and efficiently.

In The Code Economy, Chapter 8: Evolution discusses the rise of Tollhouse Cookies, McDonald’s, the difference between natural and artificial objects, and the development of evolutionary theory from Darwin through Watson and Crick and on to Kauffman and Levin’s 1987 paper, “Towards a General Theory of Adaptive Walks on Rugged Landscapes.” (With a brief stop at Erwin Schrödinger along the way.)

The difficulty with evolution is that systems are complicated; successful mutations or even just combinations of existing genes must work synergistically with all of the other genes and systems already operating in the body. A mutation that increases IQ by tweaking neurons in a particular way might have the side effect of causing neurons outside the brain to malfunction horribly; a mutation that protects against sickle-cell anemia when you have one copy of it might just kill you itself if you have two copies.

Auerswald quotes Kauffman and Levin:

“Natural selection does not work as an engineer works… It works like a tinkerer–a tinkerer who does not know exactly what he is going to produce but uses… everything at his disposal to produce some kind of workable object.” This process is progressive, moving from simpler to more complex forms: “Evolution does not produce novelties from scratch. It works on what already exists, either transforming a system to give it new functions or combining several systems to produce a more elaborate one [as] during the passage from unicellular to multicellular forms.”

Further:

The Kauffman and Levin model was as simple as it was powerful. Imagine a genetic code of length N, where each gene might occupy one of two possible “states”–for example, “0” and “1” in a binary computer. The difficulty of the evolutionary problem was tunable with the parameter K, which represented the average number of interactions among genes. The NK model, as it came to be called, was able to reproduce a number of measurable features of evolution in biological systems. Evolution could be represented as a genetic walk on a fitness landscape, in which increasing complexity was now a central parameter.

You may remember my previous post on Local Optima, Diversity, and Patchwork:

Local optima–or optimums, if you prefer–are an illusion created by distance. A man standing on the hilltop at (approximately) X=2 may see land sloping downward all around himself and think that he is at the highest point on the graph. But hand him a telescope, and he discovers that the fellow standing on the hilltop at X=4 is even higher than he is. And hand the fellow at X=4 a telescope, and he’ll discover that X=6 is even higher.

A global optimum is the best possible way of doing something; a local optimum can look like a global optimum because all of the other, similar ways of doing the same thing are worse.
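
For the curious, here’s a minimal sketch of the NK model plus a one-mutation-at-a-time adaptive walk, which stalls at exactly the sort of local optimum described above. The random fitness tables and the “next K genes” neighborhood are standard simplifications; the implementation details are mine, not Kauffman and Levin’s:

```python
import random

random.seed(42)
N, K = 12, 3  # genome length, epistatic interactions per gene

# Each gene's fitness contribution depends on its own state plus its next
# K neighbors (wrapping around). Contributions are random numbers,
# memoized lazily -- a standard NK setup.
tables = [dict() for _ in range(N)]

def fitness(genome):
    total = 0.0
    for i in range(N):
        key = tuple(genome[(i + j) % N] for j in range(K + 1))
        if key not in tables[i]:
            tables[i][key] = random.random()
        total += tables[i][key]
    return total / N

def adaptive_walk(genome):
    """Flip one bit at a time, accepting only improvements,
    until no single flip helps -- i.e., a local optimum."""
    while True:
        current = fitness(genome)
        best_flip, best_fit = None, current
        for i in range(N):
            neighbor = genome[:i] + [1 - genome[i]] + genome[i + 1:]
            f = fitness(neighbor)
            if f > best_fit:
                best_flip, best_fit = i, f
        if best_flip is None:
            return genome, current  # stuck on a hilltop
        genome[best_flip] = 1 - genome[best_flip]

# Walks from different starting points can end on different hilltops.
for _ in range(3):
    start = [random.randint(0, 1) for _ in range(N)]
    peak, fit = adaptive_walk(start)
    print("local optimum fitness:", round(fit, 3))
```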

Some notable examples of cultures that were stuck at local optima but were able, with exposure, to jump suddenly to a higher optimum: The “opening of Japan” in the late 1800s resulted in breakneck industrialization and rising standards of living; the Cherokee invented their own alphabet (technically a syllabary) after glimpsing the Roman one, and achieved mass literacy within decades; European mathematics and engineering really took off after the introduction of Hindu-Arabic numerals and the base-ten system.

If we consider each culture its own “landscape” in which people (and corporations) are finding locally optimal solutions to problems, then it becomes immediately obvious that we need both a large number of distinct cultures working out their own solutions to problems and occasional communication and feedback between those cultures so results can transfer. If there is only one, global, culture, then we only get one set of solutions–and they will probably be sub-optimal. If we have many cultures but they don’t interact, we’ll get tons of solutions, and many of them will be sub-optimal. But many cultures developing their own solutions and periodically interacting can develop many solutions and discard sub-optimal ones for better ones.

On a related note, Gore Burnelli writes: How Nassim Taleb changed my mind about religion:

Life constantly makes us take decisions under conditions of uncertainty. We can’t simply compute every possible outcome, and decide with perfect accuracy what the path forward is. We have to use heuristics. Religion is seen as a record of heuristics that have worked in the past. …

But while every generation faces new circumstances, there are also some common problems that every living being is faced with: survival and reproduction, and these are the most important problems because everything else depends on them. Mess with these, and everything else becomes irrelevant.

This makes religion an evolutionary record of solutions which persisted long enough, by helping those who held them to persist.

This is not saying “All religions are perfect and good and we should follow them,” but it is suggesting, “Traditional religions (and cultures) have figured out ways to solve common problems and we should listen to their ideas.”

From Ray Kurzweil

Back in The Code Economy, Auerswald asks:

Might the same model, derived from evolutionary biology, explain the evolution of technology?

… technology may also be nothing else but the capacity for invariant reproduction. However, in order for more complex forms of technology to be viable over time, technology also must possess a capacity for learning and adaptation.

Evolutionary theory as applied to the advance of code is the focus of the next chapter. Kauffman and Levin’s NK model ends up providing a framework for studying the creation and evolution of code. Learning curves act as the link between biology and economics.

Will the machines become sentient? Or McDonald’s? And which should we worry about?

Book Club: The Code Economy, Chs. 6-7: Learning Curves

Welcome back to EvX’s Book Club: The Code Economy, by Philip Auerswald. Today’s entry is going to be quick, because summer has started and I don’t have much time. Ch 6 is titled Information, and can perhaps be best summarized:

The challenge is to build a reliable economy with less reliable people. In this way, the economy is an information processing organism. …

When I assert that economics must properly be understood as a branch of information theory, I am referring to the centrality of the communication problem that exists whenever one person attempts to share know-how with another. I am referring, in other words, to the centrality of code.

Auerswald goes on to sketch some relevant background on “code” as a branch of economics:

The economics taught in undergraduate courses is a great example of history being written by the victors. Because the methodologies of neoclassical economics experienced numerous triumphs in the middle of the twentieth century, the study of the distribution of resources within the economy–choice rather than code and, to a lesser extent, consumption rather than production–came to be taught as the totality of economics. However, the reality is that choice and code have always coexisted, not only in the economy itself but in the history of economic thought.

And, an aside, but interesting:

Indeed, from 1807 to the time Jevons was born, the volume of shipping flowing through the port of Liverpool more than doubled.

However:

And yet, while both the city’s population and workers’ wages increased steadily, a severe economic rift occurred that separated the haves from the have-nots. As wealthy Liverpudlians moved away from the docks to newly fashionable districts on the edges of the city, those left behind in the center of town faced miserable conditions. From 1830 through 1850, life expectancy actually decreased in Liverpool proper from an already miserable 32 years to a shocking 25 years.

(You should see what happened to life expectancies in Ireland around that time.)

A reliable economy built with less reliable people is one in which individuals have very little autonomy, because autonomy means unreliable people messing things up.

Thereafter, a series of economists, Herbert Simon foremost among them, put the challenges of gathering, sharing, and processing economically relevant information at the center of their work.

Taken together and combined with foundational insights from other fields–notably evolutionary biology and molecular biology–the contributions of these economists constitute a distinct domain of inquiry within economics. These contributions have focused on people as producers and on the algorithms that drive the development of the economy.

This domain of inquiry is Code Economics.

I am rather in love with taking the evolutionary model and applying it to other fields, like the spread of ideas (memes) or the growth of cities, companies, economies, or whole countries. That is kind of what we do, here at EvolutionistX.

Chapter 7 is titled Learning: The Dividend of Doing. It begins with an amusing tale about Julia Child, who did not even learn to cook until her mid to late thirties, and then became a rather famous chef/cookbook writer. (Cooking recipes are one of Auerswald’s favored examples of “code” in action.)

Next Auerswald discusses Francis Walker, first president of the American Economic Association. Walker disagreed with the “wage fund theory” and with Jevons’s simplifying assumption that firms can be modeled as simply hiring workers until it cannot make any more money by hiring more workers than by investing in more capital.

Jevons’s formulation pushes production algorithms–how businesses are actually being run–into the background and tradeoffs between labor and capital to the foreground. But as Walker notes:

“We have the phenomenon in every community and in every trade, in whatever state of the market,” Walker observes, “of some employers realizing no profits at all, while others are making fair profits; others, again, large profits; others, still, colossal profits. Side by side, in the same business, with equal command of capital, with equal opportunities, one man is gradually sinking a fortune, while another is doubling or trebling his accumulations.”

The relevant economic data, when it finally became available, confirmed Walker’s belief about the distribution of profits, yet the difference between the high-profit and low-profit firms does not appear to hinge primarily on the question of how much labor should be substituted for capital and vice versa.

Walker argued that more profitable entrepreneurs are that way because they are able to solve a difficult problem more effectively than other entrepreneurs. … three core mechanisms for the advance of code: learning, evolution, and the layering of complexity through the development of platforms.

Moreover:

…in the empirical economics of production, few discoveries have been more universal or significant than that of the firm-level learning curve. As economist James Bessen notes, “Developing the knowledge and skills needed to implement new technologies on a large scale is a difficult social problem that takes a long time to resolve… A new technology typically requires more than an invention in order to be designed, built, installed, operated, and maintained. Initially, much of this new technical knowledge develops slowly because it is learned through experience, not in the classroom.”

Those of you who are familiar with business economics probably find learning curves and firm growth curves boring and old-hat, but they’re new and quite fascinating to me. Auerswald has an interesting story about the development of airplanes and a Depression-era challenge to develop cheaper, two-seat planes–could a plane be built for under $1,000? $700?
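
The classic firm-level learning curve–Wright’s law, which came out of studying early airframe manufacturing–says that unit cost falls by a fixed percentage every time cumulative output doubles. A quick sketch with illustrative numbers (the $1,000 starting cost and 80% curve are mine, not from the book):

```python
import math

def unit_cost(n, first_unit_cost, learning_rate=0.80):
    """Wright's law: cost of the n-th unit, where each doubling of
    cumulative output multiplies unit cost by the learning rate."""
    b = -math.log(learning_rate, 2)   # progress exponent
    return first_unit_cost * n ** (-b)

# Illustrative only: an 80% curve starting at $1,000 for unit #1.
for n in [1, 2, 4, 8, 100]:
    print(f"unit {n}: ${unit_cost(n, 1000):.0f}")
# Unit 2 costs 80% of unit 1, unit 4 costs 80% of unit 2, and so on.
```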

(Make sure to read the footnote on the speed of production of “Liberty Ships.”)

The rest of the chapter discusses the importance of proper firm management for maximizing efficiency and profits. Now, I have an instinctual aversion to managers, due to my perception that they tend to be parasitic on their workers or at least in competition with them for resources/effort, but I can admit that a well-run company is likely more profitable than a badly run one. Whether it is more pleasant for the workers is another matter, as the folks working in Amazon’s warehouses can tell you.

So why are some countries rich and others poor?

Whereas dominant variants of the neoclassical production model emphasize categories such as public knowledge and organization, which can be copied and implemented at zero cost, code economics suggests that such categories are unlikely to be significantly relevant in the practical work of creating the business entities that drive the progress of human society. This is because code at the level of a single company–what I term a “production algorithm”–includes firm-specific components. Producers far from dominant production clusters must learn to produce through a costly process of trial and error. Market-driven advances in production recipes, from which ventures with proprietary value can be created, require a tenacious will to experiment, to learn, and to document carefully the results of that learning. Heterogeneity among managers… is thus central to understanding observed differences between regions and nations. …

Management and the development of technical standards combined to enable not just machines but organizations to be interoperable and collaborative. Companies thus could become far bigger and supply chains far more complex than ever before.

As someone who actually likes shopping at Ikea, I guess I should thank a manager somewhere.

Auerswald points out that if communication of production algorithms and company methods were perfect and costless, then learning curves wouldn’t exist:

All of these examples underscore the following point, core to code economics: The imperfection of communication is not a theory. It is a ubiquitous and inescapable physical reality.

That’s all for now, but how are you enjoying the book? Do you have any thoughts on these chapters? I enjoyed them quite a bit–especially the part about Intel and the graphs of the distribution of management scores by country. What do you think of France and the UK’s rather lower “management” scores than the US and Germany?

Join us next week for Ch. 8: Evolution–should be exciting!