Book Club: The Code Economy chs. 3-4: Machines and Computers

Machines are fascinating.

Imagine if we discovered, by some chance, in a previously unexplored niche of the world, a group of “people” who looked exactly like us, but had no technology at all: no fire, no pots, no textiles, not even an Acheulean hand axe. Let us further assume that they were not merely feral children who’d gotten lost in the woods, but an actual community that had sustained itself for generations, and that all attempts to introduce them to the art of tool-making failed. They could not, despite often watching others build fires or throw pots, make their own–much less understand and join in a modern economy. (A bulldog can learn to ride a skateboard, but it cannot learn to make a skateboard.)

What would we think of them? Would they be “human”? Even though they look like us, they could only live in houses and wear clothes if we gave those to them; if not given food, they would have to stay in their native environment and hunt.

It is hard to imagine a “human” without technology, or to explain the course of human history without the advance of technology. The rise of trans-Atlantic discovery, trade, and migration that so fundamentally shaped the past 500 years cannot be explained without noting the development of the superior European ships, maps, and navigational techniques necessary to make the journey. Why did Europe discover America and not the other way around? Many reasons, but fundamentally because the Americans of 1492 didn’t have ships that could make the journey and the Europeans did.

The Romans were a surprisingly advanced society, technology-wise, and even during the High Middle Ages, European technology continued to advance, as with the spread of wind and water mills. But I think it was the adoption of (Hindu-)Arabic numerals, popularized among European mathematicians in the 1200s by Fibonacci but not in wide use until around the 1400s, that really allowed the Industrial and Information Revolutions to occur. (Roman numerals are just awful for any maths.)
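To make that concrete, here is a toy sketch (my own illustration, not from the book): with place-value notation, multiplication reduces to a mechanical, digit-by-digit routine that anyone, or any machine, can follow. There is no comparably simple procedure for multiplying XLVIII by XII in Roman numerals.

```python
# Toy sketch: place-value notation turns multiplication into a mechanical,
# digit-by-digit routine. Roman numerals admit no comparable procedure.
def long_multiply(a: int, b: int) -> int:
    """Grade-school long multiplication over decimal digits."""
    total = 0
    for place, digit_char in enumerate(reversed(str(b))):
        # Each partial product is shifted left by its place value.
        total += a * int(digit_char) * 10 ** place
    return total

print(long_multiply(48, 12))  # 576, i.e. XLVIII times XII
```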

[Image: an 18th-century set of Napier’s bones]

From the abacus and Fibonacci’s Liber Abaci, Napier developed his “bones,” an early European calculating device, in 1617. These were followed by Pascal’s calculator in 1642, the slide rule (early 1600s), and Leibniz’s Stepped Reckoner in 1672. After that, progress on the adding-machine problem was so rapid that it does not do to list every device and prototype devised, but we may mention Gaspard de Prony’s impressive logarithmic and trigonometric tables, computed by teams of human “calculators,” and Babbage’s Difference and Analytical Engines. The Arithmometer, patented in 1820, was the first commercially successful mechanical calculator, used in many an office for nearly a century.

The history of mechanical computing devices wouldn’t be complete without reference to the parallel development of European clocks and watches, which pioneered the use of gears to translate movement into numbers, not to mention the development of an industry capable of manufacturing small, high-quality gears to reasonably high tolerances.

Given this context, I find our culture’s focus on Babbage (whose engines, while impressive, were never actually completed) and his assistant Ada Lovelace a bit limited. Their contributions were interesting, but taken as a whole, the history is almost an avalanche of innovations.

Along the way, the computer has absorbed many technological innovations from outside computing: the Jacquard loom pioneered the use of punch cards, and the light bulb led to the vacuum tubes that eventually filled Colossus and ENIAC.

But these early computers had a problem: vacuum tubes broke often.

During and immediately after World War II a phenomenon named “the tyranny of numbers” was noticed, that is, some computational devices reached a level of complexity at which the losses from failures and downtime exceeded the expected benefits.[2] Each Boeing B-29 (put into service in 1944) carried 300–1000 vacuum tubes and tens of thousands of passive components.[notes 4] The number of vacuum tubes reached thousands in advanced computers and more than 17,000 in the ENIAC (1946).[notes 5] Each additional component reduced the reliability of a device and lengthened the troubleshooting time.[2] Traditional electronics reached a deadlock and a further development of electronic devices required reducing the number of their components.

Also:

…the 1946 ENIAC, with over 17,000 tubes, had a tube failure (which took 15 minutes to locate) on average every two days. The quality of the tubes was a factor, and the diversion of skilled people during the Second World War lowered the general quality of tubes.[29]
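The arithmetic behind that complaint is worth spelling out. A back-of-the-envelope sketch (my numbers, extrapolated from the quoted figures, not from the book):

```python
# Back-of-the-envelope sketch: if each of n components fails independently
# at a roughly constant rate, the whole machine's mean time between
# failures (MTBF) is about the per-component MTBF divided by n.
n_tubes = 17_000          # roughly ENIAC's tube count
machine_mtbf_hours = 48   # "a tube failure ... on average every two days"

# Implied reliability of each individual tube:
tube_mtbf_hours = machine_mtbf_hours * n_tubes
print(f"per-tube MTBF ~ {tube_mtbf_hours:,} hours "
      f"(~{tube_mtbf_hours / (24 * 365):.0f} years)")
# per-tube MTBF ~ 816,000 hours (~93 years)
```

Each tube, taken alone, was astonishingly reliable, lasting the better part of a century on average; it was the sheer count that brought the machine down every two days. That is the tyranny of numbers.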

The invention of the semiconductor transistor further revolutionized computing, bringing us a long way from the abacus of yesterday.

Chapter 3 takes a break from the development of beautiful machines to examine their effects on humans. Auerswald writes:

By the twentieth century, the systematic approach to analyzing the division of labor that de Prony developed would have a name: management science. The first and foremost proponent of management science was Frederick Winslow Taylor, a child of privilege who found his calling in factories. …

This first experience of factory work gave Taylor an understanding of the habits of workers that was as intimate as it was, ultimately, unfavorable. Being highly organized and precise by nature, Taylor was appalled at the lax habits and absence of structure that characterized the early twentieth-century factory floor. … However, Taylor ultimately concluded that the blame did not lie with the workers but in the lack of rigorously considered management techniques. At the center of management, Taylor determined, was the capacity to precisely define the tasks of which a “job” was comprised.

What distinguished Taylor was his absolute conviction that workers could not be left on their own to define, much less refine, the tasks that comprised their work. He argued that authority must be fully vested in scientifically determined routine–that is to say, code.

Sounds hellish.

I know very little of management science beyond what can be found in Charlie Chaplin’s Modern Times. According to Wikipedia, Vladimir Lenin described Taylorism as a “‘scientific’ system of sweating” more work from laborers.[3] However, in Taylor’s defense, I don’t think the adoption of Taylorism ever resulted in the mass starvation of millions of people, so maybe Lenin was wrong. Further:

In the course of his empirical studies, Taylor examined various kinds of manual labor. For example, most bulk materials handling was manual at the time; material handling equipment as we know it today was mostly not developed yet. He looked at shoveling in the unloading of railroad cars full of ore; lifting and carrying in the moving of iron pigs at steel mills; the manual inspection of bearing balls; and others. He discovered many concepts that were not widely accepted at the time. For example, by observing workers, he decided that labor should include rest breaks so that the worker has time to recover from fatigue, either physical (as in shoveling or lifting) or mental (as in the ball inspection case). Workers were allowed to take more rests during work, and productivity increased as a result.[11]

Also:

By factoring processes into discrete, unambiguous units, scientific management laid the groundwork for automation and offshoring, prefiguring industrial process control and numerical control in the absence of any machines that could carry it out. Taylor and his followers did not foresee this at the time; in their world, it was humans that would execute the optimized processes. (For example, although in their era the instruction “open valve A whenever pressure gauge B reads over value X” would be carried out by a human, the fact that it had been reduced to an algorithmic component paved the way for a machine to be the agent.) However, one of the common threads between their world and ours is that the agents of execution need not be “smart” to execute their tasks. In the case of computers, they are not able (yet) to be “smart” (in that sense of the word); in the case of human workers under scientific management, they were often able but were not allowed. Once the time-and-motion men had completed their studies of a particular task, the workers had very little opportunity for further thinking, experimenting, or suggestion-making. They were forced to “play dumb” most of the time, which occasionally led to revolts.
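It is striking how little translation the quoted valve rule needs. A minimal sketch (all the names here, Gauge, Valve, THRESHOLD_X, are hypothetical illustrations of mine, not Wikipedia’s):

```python
import time

# The quoted shop-floor rule, written out as the algorithm it already was.
# Gauge, Valve, and THRESHOLD_X are hypothetical stand-ins for illustration.
THRESHOLD_X = 100.0  # pressure limit, in whatever units gauge B reads

class Gauge:
    """Stand-in for pressure gauge B."""
    def read(self) -> float:
        return 97.3  # a real implementation would sample the hardware

class Valve:
    """Stand-in for valve A."""
    def open(self) -> None:
        print("valve A opened")

def control_loop(gauge_b: Gauge, valve_a: Valve, poll_seconds: float = 1.0) -> None:
    """Open valve A whenever pressure gauge B reads over value X."""
    while True:
        if gauge_b.read() > THRESHOLD_X:
            valve_a.open()
        time.sleep(poll_seconds)
```

The human worker and the while-loop are interchangeable here, which is exactly the point of the passage.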

While farming has its rhythms–the cows must be milked when the cows need to be milked, and not before or after; the crops must be harvested when they are ripe–much of the farmer’s day is left to his own discretion. Whether he wants to drive fence in the morning and hoe the peas in the afternoon, or attend to the peas first and the fence later is his own business. If he wants to take a nap or pause to fish during the heat of the day it is, again, his own business.

A factory can’t work like that. The guy who is supposed to bolt the doors onto the cars can’t just wander off to eat a sandwich or use the restroom whenever he feels like it, nor can he decide that today he feels like installing windshields. Factories and offices allow many men to work together by limiting the scope of each one’s activities.

Is this algorithmisation of labor inhuman, and should we therefore welcome its mechanization and automation?