Book Club: Code Economy, Ch. 10: In which I am Confused

Welcome back to EvX’s Book Club. Today we start the third (and final) part of Auerswald’s The Code Economy: The Human Advantage.

Chapter 10: Complementarity discusses bifurcation, a concept Auerswald mentions frequently throughout the book. He has a graph of the process of bifurcation, whereby the development of new code (ie, technology) leads to the creation of a new “platform” on the one hand, and new human work on the other. With each bifurcation, we move away from the corner of the graph marked “simplicity” and “autonomy,” and toward the corner marked “complexity” and “interdependence.” It looks remarkably like a graph I made about energy inputs vs outputs at different complexity levels, based on a memory of a graph I saw in a textbook some years ago.

There are some crucial differences between our two graphs, but I think they are nonetheless related–and possibly trying to express the same thing.

Auerswald argues that as code becomes platform, it doesn’t steal jobs, but becomes the new base upon which people work. The Industrial Revolution eliminated the majority of farm laborers via automation, but simultaneously provided new jobs for them, in factories. Today, the internet is the “platform” where jobs are being created, not in building the internet, but via businesses like Uber that couldn’t exist without the internet.

Auerswald’s graph (not mine) is one of the few places in the book where he comes close to examining the problem of intelligence. It is difficult to see what unintelligent people are going to do in a world that is rapidly becoming more complicated.

On the other hand, people who didn’t have access to all sorts of resources now do, due to internet-based platforms–people in the third world, for example, who never bought land-line telephones because their country couldn’t afford to build the infrastructure to support them, are snapping up mobile phones and smartphones at an extraordinary rate:

And overwhelming majorities in almost every nation surveyed report owning some form of mobile device, even if they are not considered “smartphones.”

And just like Auerswald’s learning curves from the last chapter, technological spread is speeding up. It took the landline telephone 64 years to go from 0% to 40% of the US market. Mobile phones took only 20 years to accomplish the same feat, and smartphones did it in about 10. (source.)
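
(If you want to see what that speed-up looks like as curves, here is a tiny Python sketch: logistic adoption curves with made-up rate constants, chosen only so that each one crosses 40% at the quoted times.)

```python
# Illustrative sketch only: logistic adoption curves tuned so each
# technology crosses 40% of the market at the times quoted above
# (~64, ~20, and ~10 years). The rate constants and midpoints are
# invented for the illustration, not fitted to real market data.
import math

def market_share(t, k, t_mid):
    """Logistic adoption: share of the market reached by year t."""
    return 1.0 / (1.0 + math.exp(-k * (t - t_mid)))

def years_to_reach(share, k, t_mid):
    """Invert the logistic: the year at which a given share is reached."""
    return t_mid - math.log(1.0 / share - 1.0) / k

curves = [("landline", 0.08, 69), ("mobile", 0.25, 22), ("smartphone", 0.50, 11)]
for name, k, t_mid in curves:
    print(f"{name:10s} reaches 40% at year {years_to_reach(0.40, k, t_mid):.0f}")
```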

There are now more mobile phones in the developing world than in the first world, and people aren’t just buying these phones to chat. People who can’t afford to open bank accounts now use their smartphones as “mobile wallets”:

According to the GSMA, an industry group for the mobile communications business, there are now 79 mobile money systems globally, mostly in Africa and Asia. Two-thirds of them have been launched since 2009.

To date, the most successful example is M-Pesa, which Vodafone launched in Kenya in 2007. A little over three years later, the service has 13.5 million users, who are expected to send 20 percent of the country’s GDP through the system this year. “We proved at Vodafone that if you get the proposition right, the scale-up is massive,” says Nick Hughes, M-Pesa’s inventor.

But let’s get back to Auerswald. Chapter 10 contains a very interesting description of the development of the Swiss watch industry. Of course, today, most people don’t go out of their way to buy watches, since their smartphones have clocks built into them. Have smartphones put the Swiss out of business? Not quite, says Auerswald:

Switzerland… today produces fewer than 5 percent of the timepieces manufactured for export globally. In 2014, Switzerland exported 29 million watches, as compared to China’s 669 million… But what of value? … Swiss watch exports were worth 24.3 billion in 2014, nearly five times as much as all Chinese watches combined.
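
A quick back-of-envelope on the quoted figures shows just how stark the split is. Note that the Chinese total below is only inferred from “nearly five times as much,” so treat it as approximate:

```python
# Back-of-envelope from the quoted figures. The total value of Chinese
# exports is inferred from "nearly five times as much as all Chinese
# watches combined," so it is approximate.
swiss_units,  swiss_value  = 29e6,  24.3e9
chinese_units              = 669e6
chinese_value              = swiss_value / 5       # inferred, ~4.9 billion

print(f"Swiss:   {swiss_value / swiss_units:8.2f} per watch")     # ~838
print(f"Chinese: {chinese_value / chinese_units:8.2f} per watch")  # ~7.27
# Roughly a hundredfold gap in unit value: the "cheap vs. expensive"
# bifurcation in one division.
```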

Aside from the previously mentioned bifurcation of human and machine labor, Auerswald suggests that automation bifurcates products into cheap and expensive ones. He claims that movies, visual art services (ie, copying and digitization of art vs. fine art,) and music have also undergone bifurcation, not extinction, due to new technology.

In each instance, disruptive advances in code followed a consistent and predictable pattern: the creation of a new high-volume, low-price option creates a new market for the low-volume, high-price option. Every time this happens, the new value created through improved code forces a bifurcation of markets, and of work.

Detroit

He then discusses a watch-making startup located in Detroit, which I feel completely and totally misses the point of whatever economic lessons we can draw from Detroit.

Detroit is, at least currently, a lesson in how people fail to deal with increasing complexity, much less bifurcation.

Even that word–bifurcation–contains a problem: what happens to the middle? A huge mass of people at the bottom, making and consuming cheap products, and a small class at the top, making and consuming expensive products–well, I will honor the demonstrated preferences of everyone involved for stuff, of whatever price, but what about the middle?

Is this how the middle class dies?

But if the poor become rich enough… does it matter?

Because work is fundamentally algorithmic, it is capable of almost limitless diversification through both combinatorial and incremental change. The algorithms of work become, fairly literally, the DNA of the economy. …

As Geoff Moore puts it, “Digital innovation is reengineering our manufacturing-based product-centric economy to improve quality, reduce cost, expand markets, … It is doing so, however, largely at the expense of traditional middle class jobs. This class of work is bifurcating into elite professions that are highly compensated but outside the skillset of the target population and commoditizing workloads for which the wages fall well below the target level.”

It is easy to take the long view and say, “Hey, the agricultural revolution didn’t result in massive unemployment among hunter-gatherers; the bronze and iron ages didn’t result in unemployed flint-knappers starving in the streets, so we’ll probably survive the singularity, too,” and equally easy to take the short view and say, “screw the singularity, I need a job that pays the bills now.”

Auerswald then discusses the possibilities for using big data and mobile/wearable computers to bring down healthcare costs. I am also in the middle of a Big Data reading binge, and my general impression of health care is that there is a ton of data out there (and more being collected every day,) but it is unwieldy and disorganized; doctors are too busy to use most of it and patients don’t have access to it. If someone can amass, organize, and sort that data in useful ways, some very useful discoveries could be made.
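
To make the problem concrete, here is a minimal sketch of the “amass and organize” step, with entirely invented patient records scattered across systems under inconsistent names:

```python
# A minimal sketch of the "amass, organize, sort" problem, assuming
# records scattered across systems under inconsistent patient names.
# All names, fields, and values here are invented for illustration.
from collections import defaultdict

records = [
    {"source": "clinic", "name": "DOE, JANE", "dob": "1980-04-02", "a1c": 6.9},
    {"source": "lab",    "name": "Jane Doe",  "dob": "1980-04-02", "a1c": 7.1},
    {"source": "er",     "name": "J. Doe",    "dob": "1980-04-02", "a1c": None},
]

# Pretend date of birth is a good-enough join key. Real record linkage
# (misspelled names, shared birthdays, duplicates) is the hard part, and
# a big reason the data stays unwieldy.
by_patient = defaultdict(list)
for rec in records:
    by_patient[rec["dob"]].append(rec)

for dob, recs in by_patient.items():
    readings = [r["a1c"] for r in recs if r["a1c"] is not None]
    print(f"{dob}: {len(recs)} records, A1c readings {readings}")
```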

Then we get to the graph that I didn’t understand, “Trends in Nonroutine Task Input, 1960 to 1998,” which is a bad sign for my future employment options in this new economy.

My main question is what is meant by “nonroutine manual” tasks, and since these were the occupations with the biggest effect shown on the graph, why aren’t they mentioned in the abstract?:

We contend that computer capital (1) substitutes for a limited and well-defined set of human activities, those involving routine (repetitive) cognitive and manual tasks; and (2) complements activities involving non-routine problem solving and interactive tasks. …Computerization is associated with declining relative industry demand for routine manual and cognitive tasks and increased relative demand for non-routine cognitive tasks.

Yes, but what about the non-routine manual? What is that, and why did it disappear first? And does this graph account for increased offshoring of manufacturing jobs to China?

If you ask me, it looks like there are three different events recorded in the graph, not just one. First, from 1960 onward, “non-routine manual” jobs plummet. Second, from 1960 through 1970, “routine cognitive” and “routine manual” jobs increase faster than “non-routine analytic” and “non-routine interactive.” Third, from 1980 onward, the routine jobs head downward while the analytic and interactive jobs become more common.

*Downloads the PDF and begins to read* Here’s the explanation of non-routine manual:

Both optical recognition of objects in a visual field and bipedal locomotion across an uneven surface appear to require enormously sophisticated algorithms, the one in optics and the other in mechanics, which are currently poorly understood by cognitive science (Pinker, 1997). These same problems explain the earlier mentioned inability of computers to perform the tasks of long haul truckers.

In this paper we refer to such tasks requiring visual and manual skills as ‘non-routine manual activities.’

This does not resolve the question.

Discussion from the paper:

Trends in routine task input, both cognitive and manual, also follow a striking pattern. During the 1960s, both forms of input increased due to a combination of between- and within-industry shifts. In the 1970s, however, within-industry input of both tasks declined, with the rate of decline accelerating.

As distinct from the other four task measures, we observe steady within- and between-industry shifts against non-routine manual tasks for the entire four decades of our sample. Since our conceptual framework indicates that non-routine manual tasks are largely orthogonal to computerization, we view this pattern as neither supportive nor at odds with our model.

Now, it’s 4 am and the world is swimming a bit, but I think “we aren’t predicting any particular effect on non-routine manual tasks” should have been stated up front in the thesis portion. Sticking it in here feels like ad-hoc explaining away of a discrepancy. “Well, all of the other non-routine tasks went up, but this one didn’t, so, well, it doesn’t count because they’re hard to computerize.”

Anyway, the paper is 62 pages long, including the tables and charts, and I’m not reading it all or second-guessing their math at this hour, but I feel like there is something circular in all of this–“We already know that jobs involving routine labor like manufacturing are down, so we made a model saying they decreased as a percent of jobs because of computers and automation, looked through jobs data, and lo and behold, found that they had decreased. Confusingly, though, we also found that non-routine manual jobs decreased during this time period, even though they don’t lend themselves to automation and computerization.”

I also searched in the document and could find no instance of the words “offshor-” “China” “export” or “outsource.”

Also, the graph Auerswald uses and the corresponding graph in the paper have some significant differences, especially the “routine cognitive” line. Maybe the authors updated their graph with more data, or Auerswald was trying to make the graph clearer. I don’t know.

Whatever is up with this paper, I think we may provisionally accept its data–fewer factory workers, more lawyers–without necessarily accepting its model.

The day after I wrote this, I happened to be reading Davidowitz’s Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are, which has a discussion of the best places to raise children.

Talking about Chetty’s data, Davidowitz writes:

The question asked: what is the chance that a person with parents in the bottom 20 percent of the income distribution reaches the top 20 percent of the income distribution? …

So what is it about parts of the United States where there is high income mobility? What makes some places better at equaling the playing field, of allowing a poor kid to have a pretty good life? Areas that spend more on education provide a better chance to poor kids. Places with more religious people and lower crime do better. Places with more black people do worse. Interestingly, this has an effect on not just the black kids but on the white kids living there as well.
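
The statistic in that first line is one cell of a quintile-to-quintile transition matrix. Here is a toy version with invented numbers; Chetty’s team estimates the real ones from tax records:

```python
# Toy version of the mobility statistic quoted above. The matrix values
# are invented for illustration; Chetty's team estimates the real ones
# from millions of tax records.

# transition[i][j] = P(child ends up in income quintile j | parents in quintile i)
# Rows and columns run from the bottom 20% (index 0) to the top 20% (index 4).
transition = [
    [0.34, 0.25, 0.18, 0.13, 0.10],   # parents in the bottom 20%
    [0.25, 0.24, 0.20, 0.17, 0.14],
    [0.18, 0.20, 0.22, 0.20, 0.20],
    [0.13, 0.17, 0.20, 0.24, 0.26],
    [0.10, 0.14, 0.20, 0.26, 0.30],   # parents in the top 20%
]

# The question in the quote: chance of bottom-to-top movement.
print(f"P(top 20% | born bottom 20%) = {transition[0][4]:.0%}")   # 10%

# Perfect mobility would make every entry 0.20; the gap below that in
# the corner cell is what "low upward mobility" means here.
```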

Here is Chetty’s map of upward mobility (or the lack thereof) by county. Given how closely it matches a map of “African Americans” + “Native Americans” I have my reservations about the value of Chetty’s research on the bottom end (is anyone really shocked to discover that black kids enjoy little upward mobility?) but it still has some comparative value.

Davidowitz then discusses Chetty’s analysis of where people live the longest:

Interestingly, for the wealthiest Americans, life expectancy is hardly affected by where you live. …

For the poorest Americans, life expectancy varies tremendously…. living in the right place can add five years to a poor person’s life expectancy. …

religion, environment, and health insurance–do not correlate with longer life spans for the poor. The variable that does matter, according to Chetty and the others who worked on this study? How many rich people live in a city. More rich people in a city means the poor there live longer. Poor people in New York City, for example, live longer than poor people in Detroit.

Davidowitz suggests that maybe this happens because the poor learn better habits from the rich. I suspect the answer is simpler–here are a few possibilities:

1. The rich are effectively stopping the poor from doing self-destructive things, whether positively, eg, funding cultural institutions that poor people go to rather than turn to drugs or crime out of boredom, or negatively, eg, funding police forces that discourage life-shortening crime.

2. The rich fund/support projects that improve general health, like cleaner water systems or better hospitals.

3. The effect is basically just a measurement error that doesn’t account for rich people driving up land prices. The “poor” of New York would be wealthier if they had Detroit rents (a rough sketch of this below).
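
Here is the rough sketch for that third possibility, with invented round numbers rather than actual income or rent data:

```python
# Rough illustration of point 3: the same nominal income goes much
# further at Detroit rents. All numbers are invented round figures,
# not actual income or rent data.
def disposable_income(annual_income, monthly_rent):
    return annual_income - 12 * monthly_rent

nyc     = disposable_income(30_000, 1_500)   # hypothetical poor NYC household
detroit = disposable_income(30_000, 700)     # same income at Detroit rents
print(f"After rent: NYC ${nyc:,} vs Detroit ${detroit:,}")  # $12,000 vs $21,600
```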

(In general, I think Davidowitz is stronger when looking for correlations in the data than when suggesting explanations for it.)

Now contrast this with Davidowitz’s own study on where top achievers grow up:

I was curious where the most successful Americans come from, so one day I decided to download Wikipedia. …

[After some narrowing for practical reasons] Roughly 2,058 American-born baby boomers were deemed notable enough to warrant a Wikipedia entry. About 30 percent made it through achievements in art or entertainment, 29 percent through sports, 9 percent via politics, and 3 percent in academia or science.

And this is why we are doomed.

The first striking fact I noticed in the data was the enormous geographic variation in the likelihood of becoming a big success …

Roughly one in 1,209 baby boomers born in California reached Wikipedia. Only one in 4,496 baby boomers born in West Virginia did. … Roughly one in 748 baby boomers born in Suffolk County, MA, where Boston is located, made it to Wikipedia. In some counties, the success rate was twenty times lower. …

I closely examined the top counties. It turns out that nearly all of them fit into one of two categories.

First, and this surprised me, many of these counties contained a sizable college town. …

I don’t know why that would surprise anyone. But this was interesting:

Of fewer than 13,000 boomers born in Macon County, Alabama, fifteen made it to Wikipedia–or one in 852. Every single one of them is black. Fourteen of them were from the town of Tuskegee, home of Tuskegee University, a historically black college founded by Booker T. Washington. The list included judges, writers, and scientists. In fact, a black child born in Tuskegee had the same probability of becoming a notable in a field outside of sports as a white child born in some of the highest-scoring, majority-white college towns.
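
For comparison’s sake, here are the “one in N” figures quoted so far, converted to a common rate (the numbers come straight from the passage):

```python
# The "one in N" figures quoted in this section, converted to a common
# rate (notables per 100,000 births) for easier comparison. Numbers are
# taken straight from the passage.
rates = {
    "Suffolk County, MA": 748,
    "Macon County, AL":   852,
    "California":        1209,
    "West Virginia":     4496,
}
for place, one_in_n in rates.items():
    print(f"{place:18s} {100_000 / one_in_n:6.1f} per 100k births")
```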

The other factor that correlates with the production of notables?

A big city.

Being born in San Francisco County, Los Angeles County, or New York City all offered among the highest probabilities of making it to Wikipedia. …

Suburban counties, unless they contained major college towns, performed far worse than their urban counterparts.

A third factor that correlates with success is the proportion of immigrants in a county, though I am skeptical of this finding because I’ve never gotten the impression that the southern border of Texas produces a lot of famous people.

Migrant farm laborers aside, though, America’s immigrant population tends to be pretty well selected overall and thus produces lots of high-achievers. (Steve Jobs, for example, was the son of a Syrian immigrant; Thomas Edison was the son of a Canadian refugee.)

The variable that didn’t predict notability:

One I found more than a little surprising was how much money a state spends on education. In states with similar percentages of its residents living in urban areas, education spending did not correlate with rates of producing notable writers, artists, or business leaders.

Of course, this is probably because 1. districts increase spending when students do poorly in school, and 2. rich people in urban areas send their kids to private schools.

BUT:

It is interesting to compare my Wikipedia study to one of Chetty’s team’s studies discussed earlier. Recall that Chetty’s team was trying to figure out what areas are good at allowing people to reach the upper middle class. My study was trying to figure out what areas are good at allowing people to reach fame. The results are strikingly different.

Spending a lot on education helps kids reach the upper middle class. It does little to help them become a notable writer, artist, or business leader. Many of these huge successes hated school. Some dropped out.

Some, like Mark Zuckerberg, went to private school.

New York City, Chetty’s team found, is not a particularly good place to raise a child if you want to ensure he reaches the upper middle class. It is a great place, my study found, if you want to give him a chance at fame.

A couple of methodological notes:

Note that Chetty’s data not only looked at where people were born, but also at mobility–poor people who moved from the Deep South to the Midwest were also more likely to become upper middle class, and poor people who moved from the Midwest to NYC were also more likely to stay poor.

Davidowitz’s data only looks at where people were born; he does not answer whether moving to NYC makes you more likely to become famous. He also doesn’t discuss who is becoming notable–are cities engines that make the children of already successful people even more successful, or are they places where even the poor have a shot at being famous?

I reject Davidowitz’s conclusions (which impute causation where there is only correlation) and substitute my own:

Cities are acceleration platforms for code. Code creates bifurcation. Bifurcation creates winners and losers while obliterating the middle.

This is not necessarily a problem if your alternatives are worse–if your choice is between poverty in NYC or poverty in Detroit, you may be better off in NYC. If your choice is between poverty in Mexico and poverty in California, you may choose California.

But if your choice is between a good chance of being middle class in Salt Lake City versus a high chance of being poor and an extremely small chance of being rich in NYC, you are probably a lot better off packing your bags and heading to Utah.

But if cities are important drivers of innovation (especially in science, to which we owe thanks for things like electricity and refrigerated food shipping,) then Auerswald has already provided us with a potential solution to their runaway effects on the poor: Henry George’s land value tax. As George recounts, one day, while overlooking San Francisco:

I asked a passing teamster, for want of something better to say, what land was worth there. He pointed to some cows grazing so far off that they looked like mice, and said, “I don’t know exactly, but there is a man over there who will sell some land for a thousand dollars an acre.” Like a flash it came over me that there was the reason of advancing poverty with advancing wealth. With the growth of population, land grows in value, and the men who work it must pay more for the privilege.

Alternatively, higher taxes on fortunes like Zuckerberg’s and Bezos’s might accomplish the same thing.

Book Club: Code Economy: Economics as Information Theory

If the suggestion… that the economy is “alive” seems fanciful or far-fetched, it is much less so if we consider the alternative: that it is dead.

Welcome back to your regularly scheduled discussion of Auerswald’s The Code Economy: a Forty-Thousand-Year History. Today we are discussing Chapter 9: Platforms, but feel free to jump into the discussion even if you haven’t read the book.

I loved this chapter.

We can safely answer that the economy–or the sum total of human provisioning, consumptive, productive and social activities–is neither “alive” nor, exactly, “non-living.”

The economy has a structure, yes. (So does a crystal.) It requires energy, like a plant, sponge, or macaque. It creates waste. But like a beehive, it is not “conscious;” voters struggle to make any kind of coherent policies.

Can economies reproduce themselves, like a beehive sending out swarms to found new hives? Yes, though it is difficult in a world where most of the sensible human niches have already been filled.

Auerswald notes that his use of the word “code” throughout the book is not (just) because of its modern sense in the coding of computer programs, but because of its use in the structure of DNA–we are literally built from the instructions in our genetic “code,” and society is, on top of that, layers and layers of more code, for everything from “how to raise vegetables” to “how to build an iPhone” to, yes, Angry Birds.

Indeed, as I have insisted throughout, this is more than an analogy: the introduction of production recipes into economics is… exactly what the introduction of DNA is to molecular biology. It is the essential first step toward a transformation of economics into a branch of information theory.

I don’t have much to say about information theory because I haven’t studied information theory, beyond once reading a problem about a couple of people named Alice and Bob who were trying to send messages to each other, but I did read Viktor Mayer-Schönberger and Kenneth Cukier’s Big Data: A Revolution That Will Transform How We Live, Work, and Think a couple of weeks ago. It doesn’t rise to the level of “OMG this was great you must read it,” but if you’re interested in the subject, it’s a good introduction and pairs nicely with The Code Economy, as many of the developments in “big data” are relevant to recent developments in code. It’s also helpful in understanding why on earth anyone sees anything of value in companies like Facebook and LinkedIn, which will be coming up soon.

You know, we know that bees live in a hive, but do bees know? (No, not for any meaningful definition of “knowing.”) But imagine being a bee, and slowly working out that you live in a hive, and that the hive “behaves” in certain ways that you can model, just like you can model the behavior of an individual bee…

Anyway:

Economics has a lot to say about how to optimize the level of inputs to get output, but what about the actual process of turning inputs into outputs? … In the Wonderful World of Widgets that is standard economics, ingredients combine to make a final product, but the recipe by which the ingredients actually become the product is nowhere explicitly represented.

After some papers on the NK model and the shift in organizational demands from pre-industrial economic production to post-industrial large-scale production by mega firms, (or in the case of communism, by whole states,) Auerswald concludes that

…the economy manages increasing complexity by “hard-wiring” solutions into standards, which in turn define platforms.

Original Morse Telegraph machine, circa 1835 https://en.wikipedia.org/wiki/Samuel_Morse

This is an important insight. Electricity was once a new technology, whose mathematical rules were explored by cutting-edge scientists. Electrical appliances and the grid to deliver the electricity they run on were developed by folks like Edison and Tesla.

But today, the electrical grid reaches nearly every house in America. You don’t have to understand electricity at all to plug in your toaster. You don’t have to be Thomas Edison to lay electrical lines. You just have to follow instructions.

Electricity + electrical appliances replaced many jobs people used to do, like candle making or pony express delivery man, but electricity has not resulted in an overall loss of jobs. Rather, far more jobs now exist that depend on the electrical grid (or “platform”) than were eliminated.

(However, one of the problems with codifying things into platforms is that systems then have difficulty handling other, perfectly valid methods of doing things. Early codification may lock in certain ways of doing things that are actually suboptimal, like how our computer keyboard layout is intentionally difficult to use not because of anything to do with computers, but because typewriters in the 1800s jammed if people typed on them too quickly. Today, we would be better off with a more sensible keyboard layout, but the old one persists because too many systems use it.)
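
This kind of lock-in is easy to reproduce in a toy model: if each new adopter simply joins whichever standard has the larger installed base, early luck decides the winner. A minimal Polya-urn-style sketch, with everything invented for illustration:

```python
# A toy lock-in model: two equally good standards, and each new adopter
# picks one with probability proportional to its installed base (a
# Polya-urn dynamic). Everything here is invented for illustration.
import random

def adopters(n=10_000, seed=None):
    rng = random.Random(seed)
    base = {"A": 1, "B": 1}                    # one early user each
    for _ in range(n):
        p_a = base["A"] / (base["A"] + base["B"])
        base["A" if rng.random() < p_a else "B"] += 1
    return base

for seed in range(5):
    print(adopters(seed=seed))
# Different runs lock in to wildly different splits even though the two
# standards are identical; early luck, not quality, picks the winner.
```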

The Industrial Revolution was a time of first technological development, and then encoding into platforms of many varieties–transportation networks of water and rail; electrical, sewer, and fresh water grids; the large-scale production of antibiotics and vaccines; and even the codification of governments.

The English, a nation of a couple thousand years or so, are governed under a system known as “Common Law,” which is just all of the legal precedents and traditions built up over that time that have come into customary use.

When America was founded, it didn’t have a thousand years of experience to draw on because, well, it had just been founded, but it did have a thousand years of cultural memory of England’s government. English Common Law was codified as the base of the American legal system.

The Articles of Confederation, famous only for not working very well, were the fledgling country’s first attempt at codifying how the government should operate. They are typically described as failing because they allocated insufficient power to the federal government, but I propose a more nuanced take: the Articles laid out insufficient code for dealing with nation-level problems. The Constitution solved these problems and instituted the basic “platform” on which the rest of the government is built. Today, whether we want to ratify a treaty or change the speed limit on I-405, we don’t have to re-derive the entire decision-making structure from scratch; legitimacy (for better or for worse) is already built into the system.

Since the days of the American and French revolutions, new countries have typically had “constitutions,” not because Common Law is bad, but because there is no need to re-derive successful governing platforms from scratch–they can just be copied from other countries, just as one firm can copy another firm’s organizational structure.

Continuing with Auerswald and the march of time:

Ask yourself what the greatest inventions were over the past 150 years: Penicillin? The transistor? Electrical power? Each of these has been transformative, but equally compelling candidates include universal time, container shipping, the TCP/IP protocols underlying the Internet, and the GSM and CDMA standards that underlie mobile telephony. These are the technologies that make global trade possible by making code developed in one place interoperable with code developed in another. Standards reduce barriers among people…

Auerswald, as a code enthusiast, doesn’t devote much space to the downsides of code. Clearly, code can make life easier, by reducing the number of cognitive tasks required to get a job done. Let’s take the matter of household management. If a husband and wife both adhere to “traditional gender norms,” such as an expectation that the wife will take care of internal household chores like cooking and vacuuming, and the husband will take care of external chores, like mowing the lawn, taking out the trash, and pumping gas, neither spouse has to ever discuss “who is going to do this job” or wonder “hey, did that job get done?”

Following an established code thus aids efficiency and appears to decrease marital stress (there are studies on this,) but this does not mean that the code itself is optimal. Perhaps men make better dish-washers than women. Or for a couple with a disabled partner, perhaps all of the jobs would be better performed by reversing the roles.

Technological change also encourages code change:

The replacement of manual push-mowers with gas-powered mowers makes mowing the lawn easier for women, so perhaps this task would be better performed by housewives. (Even the Amish have adopted milking machines on the grounds that, by pumping the milk away from the cow for you, the machines enable women to participate equally in the milking–a task that previously involved picking up and carrying around 90 lb milk jugs.)

But re-writing the entire code is work and involves a learning curve as both parties sort out and get used to new expectations. (See my previous thread on “emotional labor” and its relation to gender norms.) So even if you think the old code isn’t fair or optimal, it still might be easier than trying to make a new code–and this extends to far more human relations than just marriage.

And then you get cases where the new technology is incompatible with the old code. Take, for example, the relationship between transportation, weights and measures, and the French Revolution.

A country in which there is no efficient way to get from Point A to Point B has no need for a standardized set of weights and measures, as people in Community A will never encounter or contend with whatever system they are using over in Community B. Even if a king wanted to create a standard system, he would have difficulty enforcing it. Instead, each community tends to evolve a system that works well for its own needs. A community that grows bananas, for example, will come up with measures suitable to bananas, like the “bunch,” a community that deals in grain will invent the “bushel,” and a community that enumerates no goods, like the Piraha, will not bother with large quantities.

(Diamonds are measured in “carats,” which have nothing to do with the orange vegetable, but instead are derived from the seeds of the carob tree, which apparently are small enough to be weighed against small stones.)

Since the French paid taxes, there was some demand for standardized weights and measures within each province–if your taxes are “one bushel of grain,” you want to make sure “bushel” is well defined so the local lord doesn’t suddenly define this year’s bushel as twice as big as last year’s bushel–and likewise, the lord doesn’t want this year’s bushel to be defined as half the size as last year’s.

But as roads improved and trade increased, people became concerned with making sure that a bushel of grain sold in Paris was the same as a bushel purchased in Nice, or that 5 carats of diamonds in Bordeaux was still 5 carats when you reached Cognac.
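
This is exactly what a standard buys you: one shared conversion table instead of a separate negotiation in every market town. A minimal sketch, using the modern definitions (pre-metric “bushels” varied from place to place):

```python
# What a standard buys you: one shared conversion table instead of a
# separate negotiation in every market town. The values below are the
# modern definitions; pre-metric "bushels" varied from place to place.
BUSHEL_LITERS = {
    "us": 35.2391,          # US bushel
    "imperial": 36.3687,    # imperial (British) bushel
}
CARAT_GRAMS = 0.2           # the metric carat, standardized in 1907

def bushels_to_liters(qty, kind="us"):
    return qty * BUSHEL_LITERS[kind]

print(f"5 bushels of grain = {bushels_to_liters(5):.1f} L")   # 176.2 L
print(f"5 carats of diamond = {5 * CARAT_GRAMS:.1f} g")       # 1.0 g
```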

But the established power of the local nobility made it very hard to change whatever measures people were using in each individual place. That is, the existing code made it hard to change to a more efficient code, probably because the local lords were concerned that the new measures would result in fewer taxes, and the local peasants were concerned they would result in higher taxes.

Thus it was only with the decapitation of the Ancien Regime and wiping away of the privileges and prerogatives of the nobility that Revolutionary France established, as one of its few lasting reforms, a universal system of weights and measures that has come down to us today as the metric or SI system.

Now, speaking as an American who has been trained in both Metric and Imperial units, using multiple systems can be annoying, but is rarely deadly. On the scale of sub-optimal ideas, humans have invented far worse.

Quoting Richard Rhodes, The Making of the Atomic Bomb:

“The end result of the complex organization that was the efficient software of the Great War was the manufacture of corpses.

This essentially industrial operation was fantasized by the generals as a “strategy of attrition.” The British tried to kill Germans, the Germans tried to kill British and French and so on, a “strategy” so familiar by now that it almost sounds normal. It was not normal in Europe before 1914 and no one in authority expected it to evolve, despite the pioneering lessons of the American Civil War. Once the trenches were in place, the long grave already dug (John Masefield’s bitterly ironic phrase), then the war stalemated and death-making overwhelmed any rational response.

“The war machine,” concludes Elliot, “rooted in law, organization, production, movement, science, technical ingenuity, with its product of six thousand deaths a day over a period of 1,500 days, was the permanent and realistic factor, impervious to fantasy, only slightly altered by human variation.”

No human institution, Elliot stresses, was sufficiently strong to resist the death machine. A new mechanism, the tank, ended the stalemate.”

Russian Troops waiting for death

On the Eastern Front, the Death Machine was defeated by the Russian Revolution, as the cannon fodder decided it didn’t want to be turned into corpses anymore.

I find World War I more interesting than WWII because it makes far less sense. The combatants in WWII had something resembling sensible goals, some chance of achieving their goals, and attempted to protect the lives of their own people. WWI, by contrast, has no such underlying logic, yet it happened anyway–proof that seemingly logical people can engage in the ultimate illogic, even as it reduces whole countries to nothing but death machines.

Why did some countries revolt against the cruel code of war, and others not? Perhaps an important factor is the perceived legitimacy of the government itself (though regular food shipments are probably just as critical.) Getting back to information theory, democracy itself is a kind of blockchain for establishing political legitimacy, (more on this in a couple of chapters) which may account for why some countries perceived their leadership as more legitimate, and other countries suddenly discovered, as information about other people’s opinions became easier to obtain, that the government enjoyed very little legitimacy.

But I am speculating, and have gotten totally off-topic (Auerswald was just discussing the establishment of TCP/IP protocols and other similar standards that aid international trade, not WWI!)

Returning to Auerswald, he cites a brilliant quote from Alfred North Whitehead:

“Civilization advances by extending the number of operations we can perform without thinking about them.”

As we were saying, while sub-optimal (or suicidal) code can and does destroy human societies, good code can substantially increase human well-being.

The discovery and refinement of new inventions, technologies, production recipes, etc., involves a steep learning curve as people first figure out how to make the thing work and to source and put together all of the parts necessary to build it, (eg, the invention of the automobile in the late 1800s and early 1900s,) but once the technology spreads, it simply becomes part of the expected infrastructure of everyday life (eg, the building of interstate highways and gas stations, allowing people to drive cars all around the US,) a “platform” on which other, future innovations build. Post-1950, most automobile-driven innovation was located not in refinements to the engines or brakes, but in things you can do with vehicles, like long-distance shipping.

Interesting things happen to income as code becomes platforms, but I haven’t worked out all of the details.

Continuing with Auerswald:

Note that, in code economics, a given country’s level of “development” is not sensibly measured by the total monetary value of all goods and services it produces…. Rather, the development of a country consists of … its capacity to execute more complex code. …

Less-developed countries that lack the code to produce complex products will import them, and they will export simpler intermediate products and raw materials in order to pay for the required imports.

By creating and adhering to widely-observed “standards,” increasing numbers of countries (and people) are able to share inventions, code, and development.

Of the drivers of beneficial trade, international standards are at once among the most important and the least appreciated. … From the invention of bills of exchange in the Middle Ages … to the creation of twenty-first-century communications protocols, innovations in standards have lowered the cost and enhanced the value of exchange across distance. …

For entrepreneurs in developing countries, demonstrated conformity with international standards… is a universally recognized mark of organizational capacity that substantially eases entry into global production and distribution networks.

In other words, you are more likely to order steel from a foreign factory if you have some confidence that you will actually receive the variety you ordered, and the factory can signal that it knows what it is doing and will actually deliver the specified steel by adhering to international standards.

On the other hand, I think this can degenerate into a reliance on the appearance of doing things properly, which partially explains the Elizabeth Holmes affair. Holmes sounded like she knew what she was doing–she knew how to sound like she was running a successful startup because she’d been raised in the Silicon Valley startup culture. Meanwhile, the people investing in Holmes’s business didn’t know anything about blood testing (Holmes’s supposed invention tested blood)–they could only judge whether the company sounded like it was a real business.

Auerswald then has a fascinating section comparing each subsequent “platform” that builds on the previous “platform” to trophic levels in the environment. The development of each level allows for the development of another, more complex level above it–the top platform becomes the space where newest code is developed.

If goods and services are built on platforms, one atop the other, then it follows that learning at higher levels of the system should be faster than learning at lower levels, for the simple reason that learning at higher levels benefits from incremental learning all the way down.

There are two “layers” of learning. Raw material extraction shows high volatility around a gradually increasing trend, aka slow learning. By contrast, the delivery of services over existing infrastructure, like roads or wireless networks, shows exponential growth, aka fast learning.
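
Here is a little sketch of those two layers; the growth rates and noise levels are illustrative assumptions, not estimates from Auerswald’s data:

```python
# Sketch of the two "layers": raw-material extraction as a noisy,
# slowly rising trend (slow learning) vs. services on an existing
# platform compounding steadily (fast learning). Growth rates and noise
# levels are illustrative assumptions, not estimates.
import random

def slow_learning(years, trend=0.01, noise=0.15, seed=0):
    rng = random.Random(seed)
    level, series = 1.0, []
    for _ in range(years):
        level *= 1 + trend + rng.gauss(0, noise)   # volatile ~1%/yr drift
        series.append(level)
    return series

def fast_learning(years, rate=0.25):
    return [(1 + rate) ** t for t in range(1, years + 1)]  # compounding 25%/yr

print(f"After 30 years: slow ~{slow_learning(30)[-1]:.1f}x, "
      f"fast ~{fast_learning(30)[-1]:,.0f}x")
```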

In other words, the more levels of code you already have established and standardized into platforms, the faster learning goes–the basic idea behind the singularity.

 

That’s all for today. See you next week!