Book Club: The Code Economy chs. 3-4: Machines and Computers

Machines are fascinating.

Imagine if we discovered, by some chance, in a previously unexplored niche of the world, a group of “people” who looked exactly like us, but had no technology at all: no fire, no pots, no textiles, not even an Acheulean hand axe. Let us further assume that they were not merely feral children who’d gotten lost in the woods, but an actual community that had sustained itself for generations, and that all attempts to introduce them to the art of tool-making failed. They could not, despite often watching others build fires or throw pots, make their own–much less understand and join in a modern economy. (A bulldog can learn to ride a skateboard, but it cannot learn to make a skateboard.)

What would we think of them? Would they be “human”? Even though they look like us, they could only live in houses and wear clothes if we gave those to them; if not given food, they would have to stay in their native environment and hunt.

It is hard to imagine a “human” without technology, and just as hard to explain the course of human history without the advance of technology. The rise of trans-Atlantic discovery, trade, and migration that so fundamentally shaped the past 500 years cannot be explained without noting the development of the superior European ships, maps, and navigational techniques necessary to make the journey. Why did Europe discover America and not the other way around? Many reasons, but fundamentally, because the Americans of 1492 didn’t have ships that could make the journey and the Europeans did.

The Romans were a surprisingly advanced society, technologically, and even during the High Middle Ages, European technology continued to advance, as with the spread of wind and water mills. But I think it was the adoption of (Hindu-)Arabic numerals, popularized among mathematicians in the 1200s by Fibonacci, but only adopted more widely around the 1400s, that really allowed the Industrial and Information Revolutions to occur. (Roman numerals are just awful for any maths.)
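To make that parenthetical concrete: Arabic numerals are positional, so addition, multiplication, and long division all reduce to digit-by-digit routines, while Roman numerals offer nothing comparable. A small sketch (the converter is my own illustration, not anything from the book):

```python
# Why place-value numerals matter: in Roman notation even simple sums have
# no column-by-column algorithm. A small converter makes the contrast vivid.

ROMAN = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n: int) -> str:
    """Convert a positive integer to Roman numerals by greedy subtraction."""
    out = []
    for value, glyph in ROMAN:
        while n >= value:
            out.append(glyph)
            n -= value
    return "".join(out)

# 1888 is four symbols in Arabic digits but thirteen in Roman numerals,
# and the Roman symbols give no help at all with carrying or long division.
print(to_roman(1888))  # MDCCCLXXXVIII
```

Even this one-way conversion is the easy part; doing arithmetic *within* the Roman system requires memorized special cases rather than a uniform algorithm, which is exactly what made the Arabic system such an upgrade for calculation.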

An 18th century set of Napier’s Bones

From the abacus and Fibonacci’s Liber Abaci, Napier developed his “bones,” an early European mechanical calculating aid, in 1617. These were followed by Pascal’s Calculator in 1642, the slide rule (early 1600s), and Leibniz’s Stepped Reckoner in 1672. After that, progress on the adding machine problem was so rapid that it does not do to list all of the devices and prototypes devised, but we may mention Gaspard de Prony’s impressive logarithmic and trigonometric mathematical tables, computed for use by human “calculators”, and Babbage’s difference and analytical engines. The Arithmometer, patented in 1820, was the first commercially successful mechanical calculator, used in many an office for nearly a century.

The history of mechanical computing devices wouldn’t be complete without reference to the parallel development of European clocks and watches, which pioneered the use of gears to translate movement into numbers, not to mention the development of an industry capable of manufacturing small, high-quality gears to reasonably high tolerances.

Given this context, I find our culture’s focus on Babbage–whose machines, while impressive, were never actually built–and his collaborator Ada Lovelace, a bit limited. Their contributions were interesting, but taken as a whole, the history is almost an avalanche of innovations.

Along the way, the computer has absorbed many technological innovations from outside computing–the Jacquard Loom pioneered the use of punch cards; the lightbulb pioneered the vacuum tubes that eventually filled Colossus and ENIAC.

But these early computers had a problem: vacuum tubes broke often.

During and immediately after World War II a phenomenon named “the tyranny of numbers” was noticed, that is, some computational devices reached a level of complexity at which the losses from failures and downtime exceeded the expected benefits.[2] Each Boeing B-29 (put into service in 1944) carried 300–1000 vacuum tubes and tens of thousands of passive components.[notes 4] The number of vacuum tubes reached thousands in advanced computers and more than 17,000 in the ENIAC (1946).[notes 5] Each additional component reduced the reliability of a device and lengthened the troubleshooting time.[2] Traditional electronics reached a deadlock and a further development of electronic devices required reducing the number of their components.


…the 1946 ENIAC, with over 17,000 tubes, had a tube failure (which took 15 minutes to locate) on average every two days. The quality of the tubes was a factor, and the diversion of skilled people during the Second World War lowered the general quality of tubes.[29]
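The reliability arithmetic behind the “tyranny of numbers” is simple enough to sketch. A back-of-the-envelope calculation, assuming independent tube failures (so the machine’s failure rate is the sum of the per-tube rates) and using only the figures quoted above; the per-tube rate is back-solved, so treat the numbers as illustrative:

```python
# Back-of-the-envelope reliability math for the ENIAC figures quoted above.
# Assumption: tube failures are independent, so the machine behaves as a
# series system whose failure rate is the sum of the per-tube rates.

tubes = 17_000
machine_mtbf_hours = 48.0  # "a tube failure on average every two days"

# Solve for the implied per-tube mean time between failures:
tube_mtbf_hours = tubes * machine_mtbf_hours
print(f"Implied per-tube MTBF: {tube_mtbf_hours / 8760:.0f} years")

# The same math run forward: doubling the tube count halves the machine MTBF.
for n in (1_000, 17_000, 34_000):
    print(f"{n:>6} tubes -> machine MTBF {tube_mtbf_hours / n:.1f} hours")
```

The striking part is that each individual tube was, by this arithmetic, quite reliable (a mean time between failures measured in decades); the machine as a whole was crippled anyway, purely by the component count. That is the deadlock the paragraph above describes.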

The invention of the semiconductor transistor further revolutionized computing–bringing us a long way from the abacus of yesterday.

Chapter 3 takes a break from the development of beautiful machines to examine their effects on humans. Auerswald writes:

By the twentieth century, the systematic approach to analyzing the division of labor that de Prony developed would have a name: management science. The first and foremost proponent of management science was Frederick Winslow Taylor, a child of privilege who found his calling in factories. …

This first experience of factory work gave Taylor an understanding of the habits of workers that was as intimate as it was, ultimately, unfavorable. Being highly organized and precise by nature, Taylor was appalled at the lax habits and absence of structure that characterized the early twentieth-century factory floor. … However, Taylor ultimately concluded that the blame did not lie with the workers but in the lack of rigorously considered management techniques. At the center of management, Taylor determined, was the capacity to precisely define the tasks of which a “job” was comprised.

What distinguished Taylor was his absolute conviction that workers could not be left on their own to define, much less refine, the tasks that comprised their work. He argued that authority must be fully vested in scientifically determined routine–that is to say, code.

Sounds hellish.

I know very little of management science beyond what can be found in Charlie Chaplin’s Modern Times. According to Wikipedia, Vladimir Lenin described Taylorism as a “‘scientific’ system of sweating” more work from laborers.[3] However, in Taylor’s defense, I don’t think the adoption of Taylorism ever resulted in the mass starvation of millions of people, so maybe Lenin was wrong. Further:

In the course of his empirical studies, Taylor examined various kinds of manual labor. For example, most bulk materials handling was manual at the time; material handling equipment as we know it today was mostly not developed yet. He looked at shoveling in the unloading of railroad cars full of ore; lifting and carrying in the moving of iron pigs at steel mills; the manual inspection of bearing balls; and others. He discovered many concepts that were not widely accepted at the time. For example, by observing workers, he decided that labor should include rest breaks so that the worker has time to recover from fatigue, either physical (as in shoveling or lifting) or mental (as in the ball inspection case). Workers were allowed to take more rests during work, and productivity increased as a result.[11]


By factoring processes into discrete, unambiguous units, scientific management laid the groundwork for automation and offshoring, prefiguring industrial process control and numerical control in the absence of any machines that could carry it out. Taylor and his followers did not foresee this at the time; in their world, it was humans that would execute the optimized processes. (For example, although in their era the instruction “open valve A whenever pressure gauge B reads over value X” would be carried out by a human, the fact that it had been reduced to an algorithmic component paved the way for a machine to be the agent.) However, one of the common threads between their world and ours is that the agents of execution need not be “smart” to execute their tasks. In the case of computers, they are not able (yet) to be “smart” (in that sense of the word); in the case of human workers under scientific management, they were often able but were not allowed. Once the time-and-motion men had completed their studies of a particular task, the workers had very little opportunity for further thinking, experimenting, or suggestion-making. They were forced to “play dumb” most of the time, which occasionally led to revolts.
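The valve-and-gauge instruction in the passage above is already machine-executable as written, which is the whole point: once a task is reduced to an algorithmic component, the agent can be swapped. A minimal sketch of that same rule as code (the function name, threshold, and sensor interface are hypothetical stand-ins, not anything from the book):

```python
# The Taylorist instruction "open valve A whenever pressure gauge B reads
# over value X", written out as the algorithm it already is. The gauge
# reading and valve state here are hypothetical stand-ins for real hardware.

PRESSURE_LIMIT_X = 100.0  # the "value X" of the instruction

def control_step(gauge_b_reading: float, valve_a_open: bool) -> bool:
    """One pass of the rule: returns the new state of valve A."""
    if gauge_b_reading > PRESSURE_LIMIT_X:
        return True        # open valve A
    return valve_a_open    # otherwise leave the valve as it was

# A human operator, a relay, or a microcontroller can all be the "agent":
print(control_step(gauge_b_reading=120.0, valve_a_open=False))  # True
print(control_step(gauge_b_reading=80.0, valve_a_open=False))   # False
```

Nothing in the rule cares who or what executes it, which is exactly the sense in which scientific management prefigured process control: the human was the interpreter for an algorithm written a century before the hardware arrived.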

While farming has its rhythms–the cows must be milked when the cows need to be milked, and not before or after; the crops must be harvested when they are ripe–much of the farmer’s day is left to his own discretion. Whether he wants to drive fence in the morning and hoe the peas in the afternoon, or attend to the peas first and the fence later is his own business. If he wants to take a nap or pause to fish during the heat of the day it is, again, his own business.

A factory can’t work like that. The guy who is supposed to bolt the doors onto the cars can’t just wander off to eat a sandwich and use the restroom whenever he feels like it, nor can he decide that today he feels like installing windshields. Factories and offices allow many men to work together by limiting the scope of each one’s activities.

Is this algorithmisation of labor inhuman, and should we therefore welcome its mechanization and automation?


12 thoughts on “Book Club: The Code Economy chs. 3-4: Machines and Computers”

  1. >Is this algorithmisation of labor inhuman, and should we therefore welcome its mechanization and automation?

    “As higher-level wholes emerge, they become organismal and their autonomy rises. And at the same time, the autonomy of their formerly autonomous parts falls. Their parts are transformed into machines. Molecular mechanisms within bacteria in the association that led to the eukaryotic cell; cells within multicellular associations that led to the first multicellular individuals; individuals within societies in the emergence of superorganisms—all these lose their suppleness, their flexibility, their capacity to deal with a wide range of environments. The driving force behind these losses was described earlier as selection on the Whole favoring the greater economy of using simpler parts, the advantages of streamlining. But in present terms, we can see that the process has other aspects. The reduction in complexity is doubtless favored by the advantages to the whole of using machine parts, the efficacy and reliability of those parts. In terms of autonomy, there is another advantage: autonomous parts are dangerous to the whole. Parts with too much autonomy, with a capacity to go their own way, pursuing their own advantage, can destroy a whole. Thus, the machinification of parts is also a defensive strategy. […] The degree to which the previous discussion applies to humans and human societies — the degree to which we as individuals have been machinified — is an open question, one that opens up a number of entertaining lines of argument.”


    • This really is the question, isn’t it? To merely live in a city, one’s will must be constantly sublimated. I want to cross the street–no wait, there are cars. I want to shout loudly–no, my neighbors will hear. I want to paint my house this shade of highway yellow I got a great deal on–no, the HOA won’t allow it.

      Even a lot of the social conflict between the right and left these days seems to boil down to people trying to figure out how to get along in closer quarters without stepping on each others’ toes and one side saying, “Hey, maybe we wouldn’t have to worry about toes so much if there were fewer people in here and they all held to the old toe-stepping code I already believe in.” Take the conflict over trans rights. There’s no way through without making someone unhappy.

      But on the other hand, “let’s have more people working in factories and fewer robots” really doesn’t sound like a good direction. More robots can do the things that robots are good at, and humans can do… what? Hopefully, things humans are good at.


      • Sometimes I wonder, despite my strong libertarian leanings, if it would make sense to pay people to work traditional farms and other kinds of “cosplay” traditional work… Mind you, I’m not some crazy “we can feed 7 billion people with organic farms!” type, rather, use the mega commercial farms to actually feed 7 billion people, then subsidize little farms for, I don’t know, mental health? At any rate, there seem to be a lot of people with problems that go away when they’re busy, say, raising sheep or caring for horses… Maybe post-singularity, most people can live some variation of a lifelong 4-H project or boy scout camp, or a historical reenactment village (all with modern medical care and sanitation), or go all out Amish… seems less dystopian than paying everyone to play video games 24-7…

        (And, back to my inner libertarian, how is this all paid for, exactly?)


      • It doesn’t seem any worse than paying them regular welfare, as long as people enjoy it. And I think there is a reasonable place in people’s diets for locally grown, low-tech food–we might be healthier that way.


  2. The transition to agriculture took centuries or millennia, but the transition through industrialization has taken a bare century. We really haven’t had the chance to let the Darwinian feedback system work its magic, and it shows in the fertility rates. My personal recommendation is for people to try to keep at least one foot in the old ways (a big garden) while doing their best in the new (politics, or coding, or however people can make a buck nowadays).

    What costs money now, anyway? Calories, mortgage, utilities, taxes? I try to imagine what I want for my progeny, and I’m certain I don’t want them going to college for most degrees, working corporate, or competing in an ever tightening rat race. Small communities with work done on the local level but translated into whatever the mechanized economy affords, I guess.


    • The biggest issues I see, cost-wise, are housing, education, and health care. Cities are where the jobs are, so people want to live in cities, which continually drives up the cost of real estate in cities, effectively canceling out as much of the increased wealth as possible. Health care seems destined to be expensive because when it comes down to your money or your life, you pick your life. Education, though, doesn’t need to be expensive. We ought to be able to sort that one out, and step one is letting smart kids start college during their “high school” years and start working younger.

      Personally, I feel better, happier, more balanced, etc, after exercising or working in the garden. It’s hard to motivate myself to just run toward no particular destination or climb the stairs without reason, but I get something out of my time in the garden, so it feels rewarding (even if it’d be cheaper and easier to just buy flowers at the florist.)

      This afternoon, I took the kids out. It was beautiful. The temperature was perfect. The sunlight had this golden quality that made all of the little hills and valleys look beautiful–I’m waxing poetic about drainage ditches, that’s how lovely the light was. The clover nodded in the breeze and there are more berries dotting my yard than I can gather. We threw balls, played in the sandbox, rode bikes up and down the street…

      And we were the only people out there. The street was deserted.

      Where are the people? Where are the children? Is everyone just inside, watching TV? People would be better off hoeing a tiny patch of corn behind their houses than sitting on their butts missing out on summer!

      People need something to get them outside.


  3. “…Taylorism as a “‘scientific’ system of sweating” more work from laborers…”

    Taylor did a lot of studies on more efficient ways workers could do the jobs they had to do without necessarily working them harder. Old saying: “Work smarter, not harder.” True, sometimes this meant simplifying the job so that it became more boring, but it also became easier to do.

    In maybe 20 years, and certainly no later than 35, work for the average-IQ person will be gone completely. Look at this gif of computer power for $1,000 equal to a person and the timeline for this to happen. It’s coming and cannot be stopped in any way that I can see. It appears the evolution of Mankind was really just a step for the true rulers: silicon. Look at 2018. It just zooms up. This is not crazy stuff either. Some computing power limits are slowing things down a little, but there’s plenty of advances left to reach this level. We have 12 years.


      • My apologies for being so redundant; I know I keep repeating myself on this. The reason is that I think this is the most astounding thing ever to happen in the history of the planet. If you really get this in your head, it’s stupefying, and such a complete leap into the unknown that it’s like taking stone-age people, flying them around in helicopters, and showing them modern flat-screen computer tablets. The difference in the next 20 years will make or break us as humans.


      • That’s a nice graphic.
        Truth is, we really don’t know how things will go.
        Maybe… it’ll all go well, and in the future, few of us will have to do shitty jobs like coal mining or factory work. Automating a large part of food production hasn’t led to everyone starving or losing their jobs, after all.
        But change at this rate is hard to even imagine, and here we might live it. Very strange indeed.

