Industrial Society and its Future

All right. It took a while, but I have finished reading Ted’s manifesto, Industrial Society and its Future. In case you are unfamiliar with the story, Ted Kaczynski was a precocious math prodigy who was born in 1942 and matriculated at Harvard in 1958, at the age of 16. He went on to earn his PhD in math at Michigan, and in 1967 became the youngest assistant math professor in UC Berkeley’s history up to that point. By 1969, Kaczynski had clearly had enough of the Berkeley hippies and retreated to a cabin in the woods, where, he claims, he intended to live the simple life in peace. Unfortunately, one day he found that someone had built a road through his favorite hiking spot, so he began a terrorist campaign of mailing letter bombs to semi-random people, most of them professors or involved in transportation/technology. (Three people died.)

This resulted in a very long and expensive FBI manhunt, which finally ended after Ted offered to cease his bombing campaign if the Washington Post printed his manifesto, Industrial Society and its Future: Kaczynski’s brother recognized his writing style in the published essay and turned him in to the FBI. Ted is still alive, in prison.

It is unfortunate when the author of a work commits clearly reprehensible or evil acts (like killing people). For all that we attempt not to fall into ad hominems, “Do I trust the author or does he seem like a crazy guy?” remains a reasonable first-pass mechanism for sorting through the nigh-infinite quantity of potential reading material. In this case, we must simply admit up front that the author was a terrorist and murderer, then proceed to analyze his ideas as though he didn’t exist. Death of the author, indeed.

Quick overview:

Industrial Society and its Future is 35,000 words long, or the length of a novella. It is long enough to feel long when reading it, but too short to include the kind of explanatory examples and details that it could really use. (My summary will therefore draw, when necessary, from other things I have read.)

You have likely already encountered most of the concepts in Ted’s essay, either independently or because you’ve talked to people who’ve read it; the concepts are very commonly discussed on the alt-right.

Ted asserts that modern technology is making people miserable because:

1. It provides for our basic needs (ie food and shelter) with relative ease, leaving us unable to fulfill our instinctual drive to provide for our basic needs, which leaves us unhappy.

2. It makes us follow lots of rules (like “only sell pasteurized milk” or “get a driver’s license”). These rules are necessary for running advanced technology in densely populated areas, but frustrating because they significantly curtail our freedom.

For example:

A walking man formerly could go where he pleased, go at his own pace without observing any traffic regulations, and was independent of technological support-systems. When motor vehicles were introduced they appeared to increase man’s freedom. They took no freedom away from the walking man, no one had to have an automobile if he didn’t want one … But the introduction of motorized transport soon changed society in such a way as to restrict greatly man’s freedom of locomotion. When automobiles became numerous, it became necessary to regulate their use extensively. In a car, especially in densely populated areas, one cannot just go where one likes at one’s own pace; one’s movement is governed by the flow of traffic and by various traffic laws. One is tied down by various obligations: license requirements, driver test, renewing registration, insurance, maintenance required for safety, monthly payments on purchase price. Moreover, the use of motorized transport is no longer optional. Since the introduction of motorized transport the arrangement of our cities has changed in such a way that the majority of people no longer live within walking distance of their place of employment, shopping areas and recreational opportunities, so that they HAVE TO depend on the automobile for transportation. Or else they must use public transportation, in which case they have even less control over their own movement than when driving a car. Even the walker’s freedom is now greatly restricted. In the city he continually has to stop to wait for traffic lights that are designed mainly to serve auto traffic. …

To be fair, when talking about the miseries created by technology, I think we can also include things like “people incinerated by bombs during WWII” and “people whose lives were made miserable by totalitarian Soviet states,” not just people struggling to cross the street because there are too many cars.

Ted believes that this misery is bad enough that we would be happier and better off without modern technology (aside from, obviously, everyone who would die without it,) and therefore we should get rid of it.

This is, unfortunately, the essay’s weak point. Most people who read it probably say, “Yes, modern civilization has its issues, yes, cars pollute and traffic is annoying and I hate paperwork, but it sure beats getting mauled to death by lions.”

To be fair, there’s not a whole lot of research out there about what makes people happy. (I did find some; the researchers concluded that people are happy when they have friends.) Personally, I’ve only ever lived in today’s society, so I struggle to compare it to society of 200 years ago.

But let’s suppose we accept Kaczynski’s thesis and decide that modern life is making people really miserable. We can’t just say, “Okay, we’re Luddites now. Let’s put some clogs in this machine.” The system won’t let you. The system is a lot bigger and stronger than you.

Ted advocates rebellion in the essay, but later he noted that, realistically, there won’t be a mass movement of people willing to give up their TVs, so if you want to do something about the industrial system, you have to go the opposite direction: make the system worse until everyone is so stressed and miserable that they all snap and the system breaks.

This is where the essay falters, much as Marx’s did: the technological system shows no sign of completely breaking down. Even if the US collapses, China will just happily scoop up the pieces and chug right along.

(I find it somewhat amusing that Ted essentially uses a Marxist approach in his claim that the needs of a society’s economic system dictate the shape and culture of that society.)

A few things are incorrect (Ted is weak on what life is actually like in primitive societies–clearly he has never lived in one–for example, his claim that crime was lower in their societies than in ours; it wasn’t), but the general thrust is accurate, or at least an interesting position that a reasonable person could argue in good faith. The essay is quite interesting in its description of the “power process”:

33. Human beings have a need (probably based in biology) for something that we will call the “power process.” This is closely related to the need for power (which is widely recognized) but is not quite the same thing. The power process has four elements. The three most clear-cut of these we call goal, effort and attainment of goal. (Everyone needs to have goals whose attainment requires effort, and needs to succeed in attaining at least some of his goals.) The fourth element is more difficult to define and may not be necessary for everyone. We call it autonomy and will discuss it later (paragraphs 42-44).

34. Consider the hypothetical case of a man who can have anything he wants just by wishing for it. Such a man has power, but he will develop serious psychological problems. At first he will have a lot of fun, but by and by he will become acutely bored and demoralized. Eventually he may become clinically depressed. …

35. Everyone has goals; if nothing else, to obtain the physical necessities of life: food, water and whatever clothing and shelter are made necessary by the climate. …

36. Nonattainment of important goals results in death if the goals are physical necessities, and in frustration if nonattainment of the goals is compatible with survival. Consistent failure to attain goals throughout life results in defeatism, low self-esteem or depression.

37. Thus, in order to avoid serious psychological problems, a human being needs goals whose attainment requires effort, and he must have a reasonable rate of success in attaining his goals.

This is all well and good until society gets so good at making food that, poof, the majority of people can no longer actually struggle and attain meaningful goals.

People who do not have real goals to give themselves a sense of accomplishment and satisfaction try to fill the void in their lives with “surrogate activities,” which are basically everything else people do.

I think it is fair to say that modern people do have a lot of surrogate activities, and some of them are pretty embarrassing. Women sometimes become obsessed with dolls and start treating them like real children (e.g., “momalorians”); men become obsessed with movies/video games in which they pretend to be heroes; and pretty much everyone on the internet thinks that they have something very important to say about politics.

It’s hard to escape the sense that many people obsess about such things because otherwise they would have nothing to say to each other; they don’t derive meaning from their jobs or daily lives, or if they do, nothing that happens to them would make sense to the other people they talk to. At least if I reference Harry Potter, you’ll know what I’m talking about.

That all said, Ted misses one significant way people can still struggle and achieve meaningful goals: by having children. Obviously Ted never had kids of his own, nor did most of the people he knew at university, which is probably why he doesn’t address this in his essay. Nevertheless, having and raising kids is right up there with acquiring food and shelter in the list of basic human drives; evolution guarantees it. And kids, unlike food, are not being mass produced by machines. Raising children is still difficult and, yes, ultimately satisfying.

If raising one child is too simple and doesn’t provide enough difficulty to struggle and overcome, have some more. By kid three or four, you’ll be feeling that sweet, life-enhancing exhilaration of fleeing from an angry tiger. Or you’ll be really tired. No guarantees.

Ted’s next interesting concept is “oversocialization”:

24. Psychologists use the term “socialization” to designate the process by which children are trained to think and act as society demands. A person is said to be well socialized if he believes in and obeys the moral code of his society and fits in well as a functioning part of that society. It may seem senseless to say that many leftists are oversocialized, since the leftist is perceived as a rebel. Nevertheless, the position can be defended. Many leftists are not such rebels as they seem.

25. The moral code of our society is so demanding that no one can think, feel and act in a completely moral way. For example, we are not supposed to hate anyone, yet almost everyone hates somebody at some time or other, whether he admits it to himself or not. Some people are so highly socialized that the attempt to think, feel and act morally imposes a severe burden on them. In order to avoid feelings of guilt, they continually have to deceive themselves about their own motives and find moral explanations for feelings and actions that in reality have a non-moral origin. We use the term “oversocialized” to describe such people. [2]

26. Oversocialization can lead to low self-esteem, a sense of powerlessness, defeatism, guilt, etc. One of the most important means by which our society socializes children is by making them feel ashamed of behavior or speech that is contrary to society’s expectations. If this is overdone, or if a particular child is especially susceptible to such feelings, he ends by feeling ashamed of HIMSELF. Moreover the thought and the behavior of the oversocialized person are more restricted by society’s expectations than are those of the lightly socialized person. The majority of people engage in a significant amount of naughty behavior. They lie, they commit petty thefts, they break traffic laws, they goof off at work, they hate someone, they say spiteful things or they use some underhanded trick to get ahead of the other guy. The oversocialized person cannot do these things, or if he does do them he generates in himself a sense of shame and self-hatred. The oversocialized person cannot even experience, without guilt, thoughts or feelings that are contrary to the accepted morality; he cannot think “unclean” thoughts. And socialization is not just a matter of morality; we are socialized to conform to many norms of behavior that do not fall under the heading of morality. Thus the oversocialized person is kept on a psychological leash and spends his life running on rails that society has laid down for him. In many oversocialized people this results in a sense of constraint and powerlessness that can be a severe hardship. We suggest that oversocialization is among the more serious cruelties that human beings inflict on one another.

I had heard of Ted’s concept of “oversocialization” before reading the essay, but I thought it referred to people who had too many friends, socialized too much, and consequently became too obsessed with what other people think/obsessed with their reputation in other people’s minds.

On the contrary, Ted is taking a rather blank-slate approach to human nature, claiming that the “oversocialized” are people who have been molded by society to be overly restricted in their moral and personal behavior (because it is useful for the system if they act this way). This is “socialized” in the same vein as “sex is a social construct”; like the claim that primitive peoples had lower crime rates than we do, it is one of the leftist ideological bits Ted at times espouses without necessarily realizing it.

Of course people do live in societies and are shaped and molded by them in various ways, but I know many “oversocialized” people, and at least some of them were born that way. Perhaps in a different society their basic tendency to believe that they are sinners would have been discouraged, but the tendency would still be in them; had they been the sorts of people who by nature rebel against authority, our society would give them a great deal to rebel against.

Our society does set the bounds and limits for most people’s morals, however. Our current notion that racism is a great evil, for example, was not shared by our ancestors of two centuries ago.

I’d like to pause and quote Zero HP Lovecraft:

Foucault taught that power does not inhere in individuals, but in networks of people, that it is manifest between everyone and everyone else at all times, that it cannot be possessed, only enacted, and that it coerces by manufacturing “truth”

“Truth is a thing of this world: it is produced by constraint. Each society has its regime of truth: the types of discourse it accepts; the mechanisms which enable one to distinguish true and false statements, the means by which each is sanctioned”

Power is induced by “truth”, which is contingent and socially constructed. This makes conservatives bristle, because they rightly know that there is an immutable reality, but they refuse to understand how much flexion their own minds have with regard to the absolute

The dissident right breaks from the “mainstream” right precisely when it realizes, along with Foucault, that “truth is not the privilege of those who have liberated themselves.” Moldbug’s famous dictum is “The sovereign determines the null hypothesis” …

Power is decentralized. If a single node in the knowledge/power nexus flips, the cathedral treats it as damage and routes around it. If a Harvard dean or NYT editor goes rogue, they get ignored or ejected.

Everyone knows more or less what power expressed through truth demands. We can sense it; we know the magic words we can say to give orders to others. “That makes me uncomfortable.” “That’s hateful.” “That could offend some people”. The words sound innocent but they aren’t

If you challenge a person who is enacting power, they can escalate. Your nearest authority knows the “truth”, and will side with power. If he doesn’t, his superior will, or his, and so on. In rare cases, these things go to court, where truth is constituted as law and precedent …

Power is the source of social discipline and conformity. To challenge power is not a matter of seeking some ‘absolute truth’, but of detaching the power of truth from the forms of social, economic, and cultural hegemony within which it operates

In some ways, Foucault’s ideas are quite reactionary, and he drew criticism from his leftist colleagues, because his ideas, taken to their logical conclusion, undermine the idea that any kind of “emancipation” is even possible. This is undeniably true.

(There is no such thing as emancipation. Living in society is submitting to social control. Living away from society is submitting to nature’s control. Nature is a harsher master than society.)

Similarly, while living in a technological society necessitates giving up a certain amount of freedom, it also gives a certain amount of freedom. Certainly there are far more career options. Your ancestors were dirt farmers and if they didn’t want to be dirt farmers, well, they could sign up to be sailors and die of scurvy. Slavery, serfdom, and indentured servitude were widespread, child brides were common in many parts of the world, and many people effectively had no one to protect them from abuse. Today has problems, but so did the past.

In many cases, people did not willingly join the industrialized world, but instead had to be dragged kicking and screaming into it–for example, the Inclosure Acts in England and Wales forced over 200,000 farmers off their land and into the cities in the late 1700s and early 1800s, where they became the early proletarian working class of the Industrial Revolution. For many of these people the process was an absolute disaster as rates of death and disease soared. To quote Spartacus Educational:

In 1750 around a fifth of the population [of Britain] lived in towns of more than 5,000 inhabitants; by 1850 around three-fifths did. This caused serious health problems for working-class people. In 1840, 57% of the working-class children of Manchester died before their fifth birthday, compared with 32% in rural districts. (17) Whereas a farm labourer in Rutland had a life-expectancy of 38, a factory worker in Liverpool had an average age of death of 15. (18)

I have seen similar numbers elsewhere. It was really bad, mostly because most houses in Britain did not have running water or sewers at the time. Poop was either thrown into the rivers (which were most people’s only water sources) or simply piled up until someone came and carted it away. And this was the era of horse-drawn carriages, which meant cities were also full of horse poop. For example, in 1880, there were at least 150,000 horses in New York City:

At a rate of 22 pounds per horse per day, equine manure added up to millions of pounds each day and over 100,000 tons per year (not to mention around 10 million gallons of urine).
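
As a quick sanity check on those figures (assuming the 150,000-horse and 22-pound numbers quoted above): 150,000 horses × 22 pounds per day is about 3.3 million pounds of manure per day, or roughly 1,650 tons; over a year, that comes to around 600,000 tons, so “over 100,000 tons per year” is, if anything, an understatement.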

It smelled bad, to say the least. The introduction of the automobile was actually heralded for “polluting” less than horses.

But on the other hand, many people were quite happy to go to the cities. The US had both a wide-open frontier for aspiring farmers and nothing like Britain’s Inclosure Acts, yet it industrialized nonetheless. Presumably most of the people who moved to American cities in search of factory jobs did so voluntarily, like the Lowell Mill Girls:

The Lowell mill girls were young female workers who came to work in industrial corporations in Lowell, Massachusetts, during the Industrial Revolution in the United States. The workers initially recruited by the corporations were daughters of propertied New England farmers, typically between the ages of 15 and 35.[1] By 1840, the height of the Industrial Revolution, the Lowell textile mills had recruited over 8,000 workers, mostly women, who came to make up nearly three-quarters of the mill workforce…

During the early period, women came to the mills of their own accord, for various reasons: to help a brother pay for college, for the educational opportunities offered in Lowell, or to earn supplementary income. Francis Cabot Lowell specifically emphasized the importance of providing housing and a form of education to mirror the boarding schools that were emerging in the 19th century. He also wanted to provide an environment that sharply contrasted with the poor conditions of the British mills. While their wages were only half of what men were paid, many women were able to attain economic independence for the first time…

Similarly, we can cite the Great Migration of African Americans from the agricultural US South to Northern manufacturing cities, and millions of people in third world countries who have left their farms behind in favor of factory work. If the switch left them significantly unhappier, we’d expect to see many of them move back (though it is true that many a labor union strike has expressed deep dissatisfaction with factory systems).

At this point, it seems that problems like “no sanitation” have been solved and most people in the world are enjoying significantly higher standards of living than ever before.

But let’s get back to Ted, because I’ve gotten very far from the oversocialized:

27. We argue that a very important and influential segment of the modern left is oversocialized and that their oversocialization is of great importance in determining the direction of modern leftism. Leftists of the oversocialized type tend to be intellectuals or members of the upper-middle class. Notice that university intellectuals [3] constitute the most highly socialized segment of our society and also the most left-wing segment.

28. The leftist of the oversocialized type tries to get off his psychological leash and assert his autonomy by rebelling. But usually he is not strong enough to rebel against the most basic values of society. Generally speaking, the goals of today’s leftists are NOT in conflict with the accepted morality. On the contrary, the left takes an accepted moral principle, adopts it as its own, and then accuses mainstream society of violating that principle.

This is, of course, exactly what we see right now, with middle and upper-class white liberals demanding that the government do more to enforce the views of middle and upper class white liberals by rioting in the streets and tearing down statues.

Let’s look a bit at the restriction of freedom:

114. As explained in paragraphs 65-67, 70-73, modern man is strapped down by a network of rules and regulations, and his fate depends on the actions of persons remote from him whose decisions he cannot influence. This is not accidental or a result of the arbitrariness of arrogant bureaucrats. It is necessary and inevitable in any technologically advanced society. The system HAS TO regulate human behavior closely in order to function. At work people have to do what they are told to do, otherwise production would be thrown into chaos. Bureaucracies HAVE TO be run according to rigid rules. To allow any substantial personal discretion to lower-level bureaucrats would disrupt the system and lead to charges of unfairness due to differences in the way individual bureaucrats exercised their discretion. It is true that some restrictions on our freedom could be eliminated, but GENERALLY SPEAKING the regulation of our lives by large organizations is necessary for the functioning of industrial-technological society. The result is a sense of powerlessness on the part of the average person. It may be, however, that formal regulations will tend increasingly to be replaced by psychological tools that make us want to do what the system requires of us. (Propaganda [14], educational techniques, “mental health” programs, etc.)

115. The system HAS TO force people to behave in ways that are increasingly remote from the natural pattern of human behavior. For example, the system needs scientists, mathematicians and engineers. It can’t function without them. So heavy pressure is put on children to excel in these fields. It isn’t natural for an adolescent human being to spend the bulk of his time sitting at a desk absorbed in study. A normal adolescent wants to spend his time in active contact with the real world. Among primitive peoples the things that children are trained to do tend to be in reasonable harmony with natural human impulses. Among the American Indians, for example, boys were trained in active outdoor pursuits—just the sort of thing that boys like. But in our society children are pushed into studying technical subjects, which most do grudgingly. …

117. In any technologically advanced society the individual’s fate MUST depend on decisions that he personally cannot influence to any great extent. A technological society cannot be broken down into small, autonomous communities, because production depends on the cooperation of very large numbers of people and machines. Such a society MUST be highly organized and decisions HAVE TO be made that affect very large numbers of people. When a decision affects, say, a million people, then each of the affected individuals has, on the average, only a one-millionth share in making the decision. What usually happens in practice is that decisions are made by public officials or corporation executives, or by technical specialists, but even when the public votes on a decision the number of voters ordinarily is too large for the vote of any one individual to be significant. [17] Thus most individuals are unable to influence measurably the major decisions that affect their lives. There is no conceivable way to remedy this in a technologically advanced society. The system tries to “solve” this problem by using propaganda to make people WANT the decisions that have been made for them, but even if this “solution” were completely successful in making people feel better, it would be demeaning. …

119. The system does not and cannot exist to satisfy human needs. Instead, it is human behavior that has to be modified to fit the needs of the system. This has nothing to do with the political or social ideology that may pretend to guide the technological system. It is the fault of technology, because the system is guided not by ideology but by technical necessity. [18] … But the system, for good, solid, practical reasons, must exert constant pressure on people to mold their behavior to the needs of the system. … Need more technical personnel? A chorus of voices exhorts kids to study science. No one stops to ask whether it is inhumane to force adolescents to spend the bulk of their time studying subjects most of them hate. When skilled workers are put out of a job by technical advances and have to undergo “retraining,” no one asks whether it is humiliating for them to be pushed around in this way. It is simply taken for granted that everyone must bow to technical necessity, and for good reason: If human needs were put before technical necessity there would be economic problems, unemployment, shortages or worse. The concept of “mental health” in our society is defined largely by the extent to which an individual behaves in accord with the needs of the system and does so without showing signs of stress.

I would like to note a quick objection: while this is true to some extent, it is also true that mental illness is a real thing that makes people suffer.

Ted is concerned, of course, that all of this making people conform to the needs of the technological system is inhuman and cruel and transforms people into ants.

Finally, we have the question of what happens to ordinary people when technology advances to the point that the jobs they used to do become obsolete.

I’ve been worrying about the “Robot Economy,” as I dubbed it, for about a decade and a half (not as long as Ted, but I’m not as old as he is). What happens when machines get so good at doing your job that it’s no longer useful to employ you? I treated this subject at length a year or two ago in my review of Auerswald’s The Code Economy, but here is the short version:

So far, the results have been mixed. Losing your job is painful. Entire industries ceasing to employ people is even more painful, as people also lose all of the time and expense they spent to learn how to do those jobs. Retraining massive numbers of people is not easy and sometimes simply not doable. In the short term, at least, economic disruption is pretty bad.

In the long term, though, humans have so far coped with the disappearance of many professions by simply inventing new ones. Around 1800, about 90% of Americans were farmers. The invention of the tractor rendered most of them obsolete; one man could now do the work of many. Today, less than 2% of Americans are farmers.

But this massive shift in employment did not result in 88% of Americans being permanently out of work. 88% of us did not have to go on welfare, nor did we starve. People just took up new jobs that didn’t exist back in the 1800s.

If technology keeps advancing (as I think it will) and keeps displacing people from their current jobs, we will not necessarily end up with an enormous lumpenproletariat underclass that is doomed to destitution. Certainly there will be painful periods, but in the end, people will probably just get new jobs (and the more we replace repetitive, physically demanding work with robots, the more pleasant I think those new jobs will be).

So this is a bit of a white pill to Kaczynski’s black: while I don’t think things are going to be smooth, and I certainly don’t have any reason to think that America will continue to dominate the world economically and technologically, and I do agree that modern society has a lot of problems (many of which Kaczynski accurately describes), I don’t think the world in general is doomed.

That said, you can’t destroy the system. It’s not going to collapse any time soon, though dysgenics could eventually do it in. In the meanwhile, you can join the Amish, if you want. You can move to New York, if that’s your thing. (I can’t imagine wanting to live in NYC given current circumstances, but clearly some people like it there.) Most people will make a few compromises, deal with the inconveniences, and find something worth living for–usually their children.

Bluing Meat

Inspired by a question from Littlefoot, I went out to do a little sleuthing:

I remember an anthropology professor reminiscing about buying food at open-air markets somewhere in Africa, where refrigeration is non-existent; the meat is simply out in the heat and “somehow everyone doesn’t die.” It’s a bit strange to us, because we’re inundated with messages that improper food handling will lead to the growth of horrible bacteria and death (I even refrigerate the eggs and butter, even though our grandmothers never did and the French still don’t), but our ancestors not only managed without refrigeration, sometimes they actually tried to make the meat rot on purpose.

Helpful Twitter user Stefan Beldie explains that traditionally, pheasants were killed, eviscerated, and then hung for 4-10 days, depending on the weather.

The phrase “bluing meat” is extremely rare these days (try googling it), but extra-rare or uncooked meat is still referred to as “blue”:

Temperatures for beef, veal, and lamb steaks and roasts:
Extra-rare or Blue (bleu): very red; 46–49 °C (115–125 °F)

Of course, there is a bit of difference between food that is merely uncooked/barely cooked, and food that has been intentionally allowed to rot.

Here’s the tale of an Inuit (Eskimo) delicacy, walrus meat that has been allowed to decompose in a hole in the ground for a year (though I suspect not much decomposition happens for about half the year up in the Arctic).

Before you judge, remember that cheese is really just rotten vomit.

Have you ever heard the story that early modern Brits used a bunch of spices on their meat to cover up the taste of rot?

It turns out that this is a myth, a tall tale created by people misunderstanding cookbooks that gave instructions for properly rotting meat before eating it:

One of the most pervasive myths about medieval food is that medieval cooks used lots of spices to cover up the taste of rotten meat. This belief is often presented in the popular media as fact, with no cited references. Occasionally though a source is mentioned, and the trail invariably leads to:

The Englishman’s Food: Five Centuries of English Diet
J.C. Drummond, Anne Wilbraham
First published by Jonathan Cape Ltd 1939

Drummond claimed,

… It is not surprising to find that the recipe books of these times give numerous suggestions for making tainted meat edible. Washing with vinegar was an obvious, and one of the commonest procedures. A somewhat startling piece of advice is given in the curious collection of recipes and miscellaneous information published under the title of The Jewell House of Art and Nature by ‘Hugh Platt, of Lincolnes Inne Gentleman’ in 1594. If you had venison that was ‘greene’ you were recommended to ‘cut out all the bones, and bury [it] in a thin olde coarse cloth a yard deep in the ground for 12 or 20 houres’. It would then, he asserted, ‘bee sweet enough to be eaten’.” 

As Daniel Myers notes, washing with vinegar was not done to reduce spoilage, but to tenderize and get rid of the “gamey” taste of some meats. As for burying your meat to make it less spoiled, this is clearly absurd:

The example that Drummond does give is most certainly not for dealing with spoiled meat. He misinterprets the word “greene” to mean spoiled, when in fact it has the exact opposite meaning – unripe. Venison, along with a number of other meats, is traditionally hung to age for two or three days after butchering to help tenderize it and to improve the flavor. With this simple knowledge in mind, Platt’s instructions are clearly a way to take a freshly butchered carcass and speed up the aging process so that it may be eaten sooner.

Similar instructions for rapidly aging poultry can be found in Ménagier de Paris.

Item, to age capons and hens, you should bleed them through their beaks and immediately put them in a pail of very cold water, holding them all the way under, and they will be aged that same day as if they had been killed and hung two days ago.

The goal of these recipes is not to cover up rot, but to speed up the rotting (or “aging”) process.

Myers further notes that the idea of putting spices on rotten meat is absurd because spices were horribly expensive–often worth their weight in gold. It would be rather like looking at gold-leaf-wrapped caviar and concluding that the gold was there to distract the peasants from the fact that fish eggs are disgusting: you would have completely misread the dish. In the Medieval case, it would have been cheaper to buy fresh meat than to dump spices on spoiled meat.

Now, to be clear, what I’ve been calling “rot” is really more “aging.” We only think of it as rotting because we are accustomed to throwing everything in the refrigerator as soon as we get it.

As the Omaha World Herald explains:

Three factors affect the tenderness of meat in all animals, whether it be beef cattle or pheasant: background toughness, rigor mortis and aging the meat.

Background toughness results from the amount of collagen (connective tissue) in and between muscle fibers. The amount of collagen, as well as the interconnectivity of the collagen, increases as animals get older, explaining why an old rooster is naturally tougher than a young bird. Rigor mortis is the partial contracting and tightening of muscle fibers in animals after death and results from chemical changes in the muscle cells. Depending on temperature and other factors, rigor mortis typically sets in a few hours after death and maximum muscle contraction is reached 12 to 24 hours after death. Rigor mortis then begins to subside, which is when the aging (tenderization) of the meat begins.

Tenderization results from pH changes in the muscle cells after death that allow naturally occurring proteinase enzymes in cells to become active. These enzymes break down collagen, resulting in more tender meat. In beef cattle, the aging process will continue at a constant rate up to 14 days, as long as the meat is held at a proper and consistent temperature, and then decreases after that. In fowl, the rate of tenderization begins to decline after a few days.

A common misconception is that bacteria-caused rotting is responsible for meat tenderization, and this is why many find the thought of aging game repugnant. … Maintaining a constant, cool temperature is key to preventing bacterial growth when aging meats. The sickness-causing E. coli bacteria grows rapidly at temperatures at or above 60 F, but very slowly at 50 F.

It’s a very good article; RTWT.

From Littlefoot we have an article that delves into the technical side of things: Microbiological Changes in the Uneviscerated Bird Hung at 10 degrees C with Particular Reference to the Pheasant:

This study produced several interesting results. The researchers hung up both pheasants and chickens. The pheasants showed very little microbial growth for the first two weeks, whereas the chickens started turning green on day five. This is probably a result of chickens having more bacteria in them to start with, a side effect of the crowded, disease-ridden conditions chickens are typically raised in.

A taste testing panel found that pheasants that had hung for at least three days tasted better than ones that had not, with some panel members preferring birds that had aged considerably longer.

So if you plan on hunting pheasant any time soon, consider letting it age for a few days before eating it–carefully, of course. Don’t give yourself food poisoning.

Why do people claim that whites “have no culture”?

A lot of culture–aside from that time your parents dragged you to the ballet–is what we would, in honest moments, classify as “stupid things people used to do/believe.”

Now, yes, I know, it’s a bit outré for an anthropologist to declare that large swathes of culture are “stupid,” but I could easily assemble a list of hundreds of stupid things, e.g.:

The Aztecs practiced human sacrifice because they believed that if they didn’t, the world would come to an end.

In Britain, people used to believe that you could literally eat the sins of a recently deceased person, speeding their entry into Heaven. There were professional sin eaters, the last of whom, Richard Munslow, died in 1906.

Americans started eating breakfast cereal as part of an anti-masturbation campaign, and in Africa, many girls have their clitorises cut off and vaginas sewn nearly shut in a much more vigorous anti-masturbation campaign.

The Etoro of Papua New Guinea believed that young boys between the ages of 7 and 17 must “ingest” the semen of older men daily in order to mature into men.

In Mozambique, there are people who kill bald men to get the gold they supposedly have inside their heads; in the DRC, there’s a belief that eating Pygmy people will give you magic powers.

People in Salem, Massachusetts, believed that teenage girls were a good source of information on which older women in the community were witches and needed to be hanged.

Flutes assume all sorts of strange roles in various Papuan and a few Brazilian cultures–only men are allowed to see, play, or listen to the flutes, and any women who violate the flute taboo are gang raped or executed. Additionally, “…the Keraki perform flute music when a boy has been sodomized and they fear he is pregnant. This summons spirits who will protect him from such humiliation.”

Spirit possession–the belief that a god or deity can take control of and speak/dance/act through a worshiper–is found in many traditions, including West African and Haitian Voodoo. If you read Things Fall Apart, then you remember the egwugwu, villagers dressed in masks who were believed to become the spirits of gods and ancestors. Things “fall apart” after a Christian convert “kills” one of the gods by unmasking him, leading other villagers to retaliate against the local Christian mission by burning it down.

In India, people traditionally murdered their mothers by pushing them into their fathers’ funeral pyres (and those were the guys who didn’t go around randomly strangling people because a goddess told them to).

People in ancient [pretty much everywhere] believed that the gods and the deceased could receive offerings (burnt or otherwise) of meat, chairs, clothes, games, slaves, etc. The sheer quantity of grave goods buried with the deceased sometimes overwhelmed the local economy, as in ancient Egypt.

Then there’s sympathetic magic, by which things with similar properties (say, yellow sap and yellow fever, or walnuts that look like brains and actual brains) are believed to have an effect on each other.

Madagascar has a problem with bubonic plague because of a local custom of digging up dead bodies and dancing around with them.

People all over the world–including our own culture–turn down perfectly good food because it violates some food taboo they hold.

All of these customs are either stupid or terrible ideas. Of course the dead do not really come back, Zeus does not receive your burnt offering, you can’t cure yellow fever by painting someone yellow and washing off the paint or by lying in a room full of snakes, and the evil eye isn’t real, despite the fact that progressives are convinced it is. A rabbit’s foot won’t make you lucky and neither will a 4-leaf clover, and your horoscope is meaningless twaddle.

Obviously NOT ALL culture is stupid. Most of the stuff people do is sensible, because if it weren’t, they’d die out. Good ideas have a habit of spreading, though, making them less unique to any particular culture.

Many of the bad ideas people formerly held have been discarded over the years as science and literacy have given people the ability to figure out whether a claim is true or not. Superstitions about using pendulums to tell if a baby is going to be a boy or a girl have been replaced with ultrasounds, which are far more reliable. Bleeding sick patients has been replaced with antibiotics and vaccinations; sacrifices to the gods to ensure good weather have been replaced with irrigation systems.

In effect, science and technology have replaced much of the stuff that used to count as “culture.” This is why I say “science is my culture.” This works for me, because I’m a nerd, but most people aren’t all that emotionally enthralled by science. They feel a void where all of the fun parts of culture have been replaced.

Yes, the fun parts.

I like that I’m no longer dependent on the whims of the rain gods to water my crops and prevent starvation, but this also means I don’t get together with all of my family and friends for the annual rain dance. It means no more sewing costumes and practicing steps; no more cooking a big meal for everyone to enjoy. Culture involves all of the stuff we invest with symbolic meaning about the course of our lives, from birth to coming of age to marriage, birth of our own children, to old age and death. It carries meaning for families, love, and friendship. And it gives us a framework for enjoyable activities, from a day of rest from our labors to the annual “give children candy” festival.

So when people say, “Whites have no culture,” they mean four things:

  1. A fish does not notice the water it swims in–whites have a culture, but don’t notice it because they are so accustomed to it
  2. Most of the stupid/wrong things whites used to do that we call “culture” have been replaced by science/technology
  3. That science/technology has spread to other cultures because it is useful, rendering white culture no longer unique
  4. Technology/science/literacy have rendered many of the fun or emotionally satisfying parts of ritual and culture obsolete.

Too often people denigrate the scientific way of doing things on the grounds that it isn’t “cultural.” This comes up when people say things like “Indigenous ways of knowing are equally valid as Western ways of knowing.” This is a fancy way of saying that “beliefs that are ineffective at predicting the weather, growing crops, curing diseases, etc, are just as correct as beliefs that are effective at doing these things,” or [not 1]=[1].

We shouldn’t denigrate doing things in ways that actually work; science must be respected as valid. We should, however, find new ways to give people an excuse to do the fun things that used to be tied up in cultural rituals.


Fame is Terrible for People

(Image: Elvis Presley in Jailhouse Rock)

While researching my post on Music and Sex, I noticed a consistent pattern: fame is terrible for people.

Too many musicians to list have died from drug overdoses or suicide. Elvis died of a drug overdose. John Lennon attracted the attention of a crazy fan who assassinated him. Kurt Cobain killed himself (or, yes, conspiracy theorists note, might have been murdered). Linkin Park’s Chester Bennington committed suicide. Alice in Chains’ Layne Staley died of a heroin overdose. The list continues.

Far more have seen their personal relationships fail, time after time. The lives of stars are filled with breakups and drama, not just because tabloids care to report on them, but also because of the drugs, wealth, and easy availability of other partners.

At least musicians get something (money, sex) out of their fame, and most went willingly into it (child stars, not so much). But many people today are thrust completely unwillingly into the spotlight and get nothing from it–people caught on camera in awkward incidents, or whose absurd video suddenly went viral for all the wrong reasons, or who caught the notice of an internet mob.

Here we have people like the students from Covington Catholic, or the coffee shop employee who lost her job after not serving a black woman who arrived after the shop had closed, or, for that matter, almost all of the survivors of mass shootings, especially the ones that attract conspiracy theorists.

It seems that fame, like many other goods, is subject to diminishing returns. Going from zero fame to a little fame is nearly always good. Companies have to advertise products so customers know they exist. Being known as an expert in your field will net you lots of business, recommendations, or just social capital. Being popular in your school or community is generally pleasant.

At this level, increasing fame means increasing numbers of people who know and appreciate your work, while still remaining obscure enough that people who don’t like or care for your creations will simply ignore you.

Beyond a certain level of fame, though, you’ve already gotten the attention of most people who like you, and are now primarily reaching people who aren’t interested or don’t like you. If you become sufficiently famous, your fame alone will drive people who dislike your work to start complaining about how stupid it is that someone who makes such terrible work can be so famous. No one feels compelled to talk about how much they hate a local indie band enjoyed by a few hundred teens, but millions of people vocally hate Marilyn Manson.

Sufficient fame, therefore, attracts more haters than lovers.

This isn’t too big a deal if you’re a rock star, because you at least still have millions of dollars and adoring fans. This is a big deal if you’re just an ordinary person who accidentally became famous and wasn’t prepared in any way to make money or deal with the effects of a sudden onslaught of hate.

Fame wasn’t always like this, because media wasn’t always like this. There were no million-album recording artists in the 1800s. There were no viral internet videos in the 1950s. Like everything in Texas, fame in our winner-take-all economy is bigger–and thus so are its effects.

I think we need to tread this fame-ground very carefully: recognize when we (or others) are thrusting unprepared people into the spotlight, and withdraw from mobbing tactics. Teenagers, clearly, should not be famous. But more mundane people, like writers who have to post under their real names (or well-known pseudonyms), probably also need to take steps to insulate themselves from the spasms of random mobs of haters. The current trend of writers taking mobs–at least SJW mobs–seriously and trying to appease them is another effect of people having fame thrust upon them without knowing how to deal with it.

What is “Society”?

In Sociobiology, E. O. Wilson defines a “population” as a group that (more or less) inter-breeds freely, while a “society” is a group that communicates. Out in nature, the borders of a society and a population are usually the same, but not always.

Modern communication has created a new, interesting human phenomenon–our “societies” no longer match our “populations.”

Two hundred years ago, news traveled only as fast as a horse (or a ship), cameras didn’t exist, and newspapers/books were expensive. By necessity, people got most of their information from the other people around them. One hundred years ago, the telegraph had sped up communication, but photography was expensive, movies had barely been invented, and information still traveled slowly. News from the front lines during WWI arrived home well after the battles occurred–probably contributing significantly to the delay in realizing that military strategies were failing horrifically.

Today, the internet/TV/cheap printing/movies/etc are knitting nations into conversational blocks limited only by language (and even that is a small barrier, given the automation of pretty effective translation), but still separated by national borders. It’s fairly normal now to converse daily with people from three or four different countries, but never actually meet them. 

This is really new, really different, and kind of weird.

Since we can all talk to each other, people are increasingly, it seems, treating each other as one big society, despite the fact that we hail from different cultures and live under different governments. What happens in one country or to one group of people reverberates across the world. An American comforts a friend in Malaysia who is sick to her stomach because of a shooting in New Zealand. Both agree that the shooting actually had nothing to do with a popular Swedish YouTuber, despite the shooter enjoining his viewers (while livestreaming the event) to “subscribe to Pewdiepie.” Everything is, somehow, the fault of the American president, or maybe we should go back further, and blame the British colonists.

It’s been a rough day for a lot of people.

Such big “societies” are unwieldy. Many of us dislike each other. We certainly can’t control each other (not without extreme tactics), and no one likes feeling blamed for someone else’s actions. Yet we all want each other to behave better, to improve. How to improve is a tough question.

We also want to be treated well by each other, but how often do we encounter people who are simply awful?

The same forces that knit us together also split us apart–and what it means to be a society but not a population remains to be seen.

Cyborg Dreams: Alita Review with Spoilers

This is a review for Alita: Battle Angel, now out in theaters. If you want the review without spoilers, scroll down quickly to the previous post.

It is difficult for any movie to be truly deep. Is Memento deep, or does it just use a backwards-narrative gimmick? Often meaning is something we bring to movies–we interpret them based on our own experiences.

What is the point of cyborgs? They are the ultimate fusion of man and machine. Our technology doesn’t just serve us; it has become us.  What are we, then? Are cyborgs human, or more than human? And what of the un-enhanced meatsacks left behind?

Throughout the movie, we see humans with various levels of robotic enhancement, from otherwise normal people with an artificial limb to monstrous brawlers who are almost unrecognizable as human. Alita is a complete cyborg whose only remaining “human” part is her biological brain (perhaps she has a skull, too). The rest of her, from heart to toes, is machine, and can be disassembled and replaced as necessary.

The graphic novels go further than Alita–in one case, a whole community breaks down after it discovers that the adults have had their brains replaced with computer chips. Can a “human” have a metal body but a meat brain? Can a “human” have a meat body but a computer brain? Alita says yes, that humanity is more than just the raw material we are built of.

(It also cuts in the other direction–is the jet-powered hammer Ido carries into battle any different from a jet-powered hammer built into your arm? Does it matter whether you can put the technology down and pack it into a suitcase at the end of the day, or if it is built into your core?)

Yet cyborgs in Alita’s world, despite their obvious advantage over mere humans in terms of speed, reflexes, strength, and ability to switch your arms out for power saws, are mostly true to their origin as disabled people whose bodies were replaced with artificial limbs. Alita’s first body, given to her at the beginning of the movie after she is found without one, was originally built for a little girl in a wheelchair. She reflects to a friend that she is now fast because the little girl’s father built her a fast pair of legs so she could finally run.

The upper class–to the extent that we see them–has no obvious enhancements. Indeed, the most upper class family we meet in the movie, which originally lived in the floating city of Tiphares (Zalem in the movie) was expelled from the city and sent down to the scrap yard with the rest of the trash because of their disabled daughter–the one whose robotic body Alita inherits.

Hugo is an ordinary meat boy with what we may interpret as a serious prejudice against cyborgs–though he comes across as a nice lad, he moonlights as a thief who kidnaps cyborgs and chops off their body parts for sale on the black market. Hugo justifies himself by claiming he “never killed anyone,” which is probably true, but the process certainly hurts the cyborgs (who cry out in pain as their limbs are sawed off) and leaves them lying disabled in the street.

Hugo isn’t doing it because he hates cyborgs, though. They’re just his ticket to money–the money he needs to get to Tiphares/Zalem. For even though it is said that no one in the Scrap Yard (Iron City in the movie) is ever allowed into Tiphares, people still dream of Heaven. Hugo believes a notorious fixer named Vector can get him into Tiphares if he just pays him enough money.

Some reviewers have identified Vector as the Devil himself, based on his line, “Better to reign in Hell than serve in Heaven,” which the Devil speaks in Milton’s Paradise Lost–though Milton is himself reprising Achilles in the Odyssey, who claims, “By god, I’d rather slave on earth for another man / some dirt-poor tenant farmer who scrapes to keep alive / than rule down here over all the breathless dead.” 

Yet the Scrap Yard is not Hell. Hell is another layer down: the sewers below the Scrap Yard, where Alita’s first real battle occurs. The Scrap Yard is Purgatory; the Scrap Yard is Earth, suspended between Heaven and Hell, from which people can choose to rise (to Tiphares) or descend (to the sewers). But whether Tiphares is really Heaven or just a dream they’ve been sold remains to be seen–for everyone in the Scrap Yard is fallen, and none may enter Heaven.

Alita, you probably noticed, descended into Hell to fight an evil monster–in the manga, because he kidnapped a baby; in the movie, because he was trying to kill her. In the ensuing battle, she is crushed and torn to pieces, sacrificing her final limb to drill out the monster’s eye. Her unconscious corpse is rescued by her friends, dragged back to the surface, and then rebuilt with a new body.

“I do not stand by in the presence of evil”–Alita

“But let me reveal to you a wonderful secret. We will not all die, but we will all be transformed! It will happen in a moment, in the blink of an eye, when the last trumpet is blown. For when the trumpet sounds, those who have died will be raised to live forever. And we who are living will also be transformed. For our dying bodies must be transformed into bodies that will never die; our mortal bodies must be transformed into immortal bodies.” –1 Corinthians 15:51–53

Alita has died and been resurrected. Whether she will ascend into Heaven remains a matter for the sequel. (She does. Obviously.)

Through his relationship with Alita (they smooch), Hugo realizes that cyborgs are people, too, and maybe he shouldn’t chop them up for money.  “You are more human than anyone I know,” he tells her.

Alita, in a scene straight from The Last Temptation of Christ, offers Hugo her heart–literally–to sell to raise the remaining money he needs to make it to Tiphares.

Hugo–newly resurrected after Alita saves his life by literally hooking him up to her own life-support system–thankfully declines the offer and instead tries to reach Tiphares on his own two feet. But no mere mortal can ascend to Tiphares; even giants may not assault the gates of Heaven.

The people of the Scrap Yard are fallen–literally–from Tiphares; their belongings and buildings are either relics from the time before the fall or built from trash dumped from above. There is hope in the Scrap Yard, yet the Scrap Yard generates very little of its own, which explains its utterly degraded state.

This is a point where the movie fails–the set builders made the set too nice. The Scrap Yard is a decaying, post-apocalyptic waste filled with strung-out junkies and hyper-violent-TV addicts. In one scene in the manga, Doc Ido, injured, collapses in the middle of a crowd while trying to drag the remains of Alita’s crushed body back home so he can fix her. Bleeding, he cries out for help–but the crowd, entranced by the story playing out on the screens around them, ignores them both.

In the movie, the Scrap Yard has things like oranges and chocolate–suggesting long-distance trade and large-scale production–luxuries it really shouldn’t be able to manage. In the manga, the lack of police makes sense, as this is a society with no ability to cooperate for the common good. Since the powers that be would like to at least prevent their own deaths at the hands of murderers, the Scrap Yard instead puts bounties on the heads of criminals, and licensed “Hunter Warriors” decapitate them for money.

(A hunter license is not difficult to obtain. They hand them out to teenage girls.)

Here the movie enters its discussion of Free Will.

Alita awakes with no memory of her life before she became a decapitated head sitting in a landfill. She has the body of a young teen and, thankfully, adults willing to look out for her as she learns about life in Iron City from the ground up–first, that oranges have to be peeled; second, that cars can run you over.

The movie adds the backstory about Doc Ido’s deceased, disabled daughter, for whom he built the original body that he gives to Alita. This is a good move, as it makes explicit a relationship that takes much longer to develop in the manga (movies just don’t have the same time to develop plots as a manga series spanning decades). Since Alita has no memory, she doesn’t remember her own name (Yoko). Doc therefore names her “Alita,” after the daughter whose body she now wears.

As an adopted child myself, I feel a certain kinship with narratives about adoption. Doc wants his daughter back. Alita wants to discover her true identity. Like any child, she is growing up, discovering love, and wants different things for her life than her father does.

Despite her amnesia, Alita has certain instincts. When faced with danger, she responds–without knowing how or why–with a sudden explosion of violence, decapitating a cyborg that has been murdering young women in her neighborhood. Alita can fight; she is extremely skilled in an advanced martial art developed for cyborgs. In short, she is a Martian battle droid that has temporarily mistaken itself for a teenage girl.

She begs Ido to hook her up to a stronger body (the one intended for his daughter was not built with combat in mind), but he refuses, declaring that she has a chance to start over, to become something totally new. She has free will. She can become anything–so why become a battle robot all over again?

But Alita cannot just remain Doc’s little girl. Like all children, she grows–and like most adopted children, she wants to know who she is and where she comes from. She is good at fighting. This is her only connection to her past, and as she asserts, she has a right to that. Doc Ido has no right to dictate her future.

What is Alita? As far as she knows, she is trash, broken refuse literally thrown out through the Tipharean rubbish chute. The worry that you were adopted because you were unwanted by your biological parents–thrown away–plagues many adopted children. But as Alita discovers, this isn’t true. She’s not trash–she’s an alien warrior who once attacked Earth and ended up unconscious in the scrap yard after losing most of her body in the battle. Like the Nephilim, she is a heavenly battle angel who literally fell to Earth.

By day, Ido is a doctor, healing people and fixing cyborgs. By night, he is a Hunter Warrior, killing people. For Ido, killing is an expression of rage after his daughter’s death, a way of channeling a psychotic impulse into something that benefits society by aiming it at people even worse than himself. But for Alita, violence serves a greater purpose–she uses her talent to eliminate evil and serve justice. Alita’s will is to protect the people she loves.

After Alita runs away, gets in a fight, descends into Hell, and is nearly completely destroyed, Doc relents and attaches her to a more powerful warrior body. He recognizes that time doesn’t freeze and he cannot keep Alita forever as his daughter (a theme revisited later in the manga, when Nova tries to trap Alita in an alternative-universe simulation where she never becomes a Hunter Warrior).

In an impassioned speech, Nova declares, “I spit upon the second law of thermodynamics!” He wants to freeze time; prevent decay. But even Nova, as we have seen, cannot contain Alita’s will. She knows it is a simulation. She plays along for a bit, enjoying the story, then breaks out.

Alita’s new body uses “nanotechnology,” which is to say, magic, to keep her going. Indeed, the technology in the movie is no more explained than magic in Harry Potter, other than some technobabble about how Alita’s heart contains a miniature nuclear reactor that could power the whole city, which is how she was able to stay alive for 300 years in a trash heap.

With her more powerful body, Alita is finally able to realize herself.

Alita’s maturation from infant (a living head completely unable to move) to young adult is less explicit in the movie than in the manga, but it is still there–with the reconfiguration of her new body based on Alita’s internal self-image, Doc discovers that “she is a bit older than you thought she was.” In a dream sequence in the original, the metaphors are made explicit–limbless Alita in one scene becomes an infant strapped to Doc’s back as he roots through the dump for parts. Then she receives a pair of arms, and finally legs, turning into a toddler and then a girl. Finally, with her berserker body, she achieves adulthood.

But with all of this religious imagery, is Tiphares really heaven? Of course not–if it were, why would Nova–who is the true villain trying to kill her–live there? There was a war in the Heavens–but the Heavens are far beyond Tiphares. Alita will escape Purgatory and ascend to Tiphares–and unlike the others, she will not do it by being chopped into body parts for Nova’s experiments.

For the mind is its own place, and can make a Heaven of Hell, and a Hell of Heaven.

Tiphares is only the beginning, just as the Scrap Yard is not the Hell we take it for.

Review: Battle Angel Alita 5/5 stars

I have not seen a movie aimed at adults, in an actual theater, in over a decade. Alita: Battle Angel broke my movie fast because I was a huge fan of the manga.

It was marvelous.

I can’t judge the movie from the perspective of someone who has seen every last Marvel installment, nor one who hasn’t read the manga. But it is visually stunning, with epic battle scenes and a philosophical core.

What does it mean to be human? Can robots be human? What about humanoid battle cyborgs? Alita is simultaneously human–a teenage girl searching for her place in this world–and inhuman–a devastating battle droid.

I don’t want to give away too many spoilers, so I’ll showcase the trailer:

Yes, she has giant anime eyes. You get used to it quickly.

I saw it in 3D, which was amazing–the technology we have for making and distributing movies in general is amazing, but this is a film whose action sequences really stand out in the medium.

The story is basically true to its manga inspiration, though there are obvious changes. The original story is much too long for a single movie, for example, and the characters often paused in the middle of battle for philosophical conversations. The movie lets the philosophy hang more in the background (even skipping the Nietzsche).

The movie’s biggest weakness was the main set, which was just too pleasant looking to be as gritty as the characters regarded it. There are a few other world-building inconsistencies, but nothing on the scale of “Why didn’t the giant eagles just fly the ring to Mordor?” or “how does money work in Harry Potter?”

The movie has no shoe-horned-in political agenda–Alita never stops to whine about how women are treated in Iron City, for example; she just explodes with family-protecting violence. The plot is structured around class inequality, but this is a fairly believable backdrop, since class is a real thing we all deal with in the real world. The movie does feature the “tiny girl who can beat up big bad guys” trope, but then, she is a battle droid made of metal, so Alita’s fighting skills make more sense than, say, River Tam’s.

Unfortunately, there are a few loose ends that are clearly supposed to carry into a sequel, which may not happen if all of the naysayers get their way. This makes the movie feel a touch unfinished–the story isn’t over.

So what’s with all of the bad reviews?

Over on Rotten Tomatoes, the critics gave the movie a 60% rating, while the moviegoing public has given it a 93% rating. That’s quite the split. Perhaps there are some movies that critics just don’t get, but certain fans love. But I note that other superhero movies, like Iron Man and Guardians of the Galaxy, received quite good reviews, despite the fact that pretty much all superhero movies are absurd if you think about them for too long. (GotG stars a raccoon, for goodness’ sake.)

Overall, if you like superhero/action movies, you will probably like Alita.

So why did I like the manga so much?

In part, it was just timing–I had a Japanese friend and we liked to hang out and watch anime together. In part it was artistic–Alita is a lovely character, and as a young female, I was smitten both with her cyborg good looks and the fact that she looks more like me than most superheroes. I spent much of my youth drawing cyborg girls of my own. Beyond that, it’s hard to say–sometimes you just like something.

What about you? Seen anything good, lately?

Book Club: How to Create a Mind, pt 2/2

(Photo: Ray Kurzweil, writer, inventor, thinker)

Welcome back to EvX’s Book Club. Today we are finishing Ray Kurzweil’s How to Create a Mind: The Secret of Human Thought Revealed.

Spiders are interesting, but Kurzweil’s focus is computers, like Watson, which trounced the competition on Jeopardy!

I’ll let Wikipedia summarize Watson:

Watson was created as a question answering (QA) computing system that IBM built to apply advanced natural language processing, information retrieval, knowledge representation, automated reasoning, and machine learning technologies to the field of open domain question answering.[2]

The sources of information for Watson include encyclopedias, dictionaries, thesauri, newswire articles, and literary works. Watson also used databases, taxonomies, and ontologies. …

Watson parses questions into different keywords and sentence fragments in order to find statistically related phrases.[22] Watson’s main innovation was not in the creation of a new algorithm for this operation but rather its ability to quickly execute hundreds of proven language analysis algorithms simultaneously.[22][24] The more algorithms that find the same answer independently the more likely Watson is to be correct.[22] Once Watson has a small number of potential solutions, it is able to check against its database to ascertain whether the solution makes sense or not.[22]
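Watson’s actual DeepQA pipeline is far more elaborate than anything I can show here, but the confidence-weighted voting idea the quote describes fits in a few lines. A minimal sketch (the scorers and their confidences are invented for illustration):

```python
from collections import Counter

def ensemble_answer(question, scorers):
    """Run many independent scorers; return the answer with the
    most confidence-weighted agreement."""
    votes = Counter()
    for scorer in scorers:
        answer, confidence = scorer(question)
        votes[answer] += confidence
    return votes.most_common(1)[0]

# Toy scorers standing in for hundreds of language-analysis
# algorithms; each returns (candidate answer, confidence).
scorers = [
    lambda q: ("Toronto", 0.9),
    lambda q: ("Chicago", 0.6),
    lambda q: ("Chicago", 0.5),
]
print(ensemble_answer("US city with two famously named airports?", scorers))
# ('Chicago', 1.1) -- independent agreement beats one confident guess
```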

Kurzweil opines:

That is at least one reason why Watson represents such a significant milestone: Jeopardy! is precisely such a challenging language task. … What is perhaps not evident to many observers is that Watson not only had to master the language in the unexpected and convoluted queries, but for the most part its knowledge was not hand-coded. It obtained that knowledge by actually reading 200 million pages of natural-language documents, including all of Wikipedia… If Watson can understand and respond to questions based on 200 million pages–in three seconds!–there is nothing to stop similar systems from reading the other billions of documents on the Web. Indeed, that effort is now under way.

A point about the history of computing that may be petty of me to emphasize:

Babbage’s conception is quite miraculous when you consider the era in which he lived and worked. However, by the mid-twentieth century, his ideas had been lost in the mists of time (although they were subsequently rediscovered.) It was von Neumann who conceptualized and articulated the key principles of the computer as we know it today, and the world recognizes this by continuing to refer to the von Neumann machine as the principal model of computation. Keep in mind, though, that the von Neumann machine continually communicates data between its various units and within those units, so it could not be built without Shannon’s theorems and the methods he devised for transmitting and storing reliable digital information. …

You know what? No, it’s not petty.

Amazon lists 57 books about Ada Lovelace aimed at children, 14 about Alan Turing, and ZERO about John von Neumann.

(Some of these search results are irrelevant, but the counts are roughly correct.)

“EvX,” you may be saying, “Why are you counting children’s books?”

Because children are our future, and the books that get published for children show what society deems important for children to learn–and will have an effect on what adults eventually know.

I don’t want to demean Ada Lovelace’s role in the development of software, but surely von Neumann’s contributions to the field are worth a single book!

*Slides soapbox back under the table*

Anyway, back to Kurzweil, now discussing quantum mechanics:

There are two ways to view the questions we have been considering–the converse Western and Eastern perspectives on the nature of consciousness and of reality. In the Western perspective, we start with a physical world that evolves patterns of information. After a few billion years of evolution, the entities in that world have evolved sufficiently to become conscious beings. In the Eastern view, consciousness is the fundamental reality; the physical world only comes into existence through the thoughts of conscious beings. …

The East-West divide on the issue of consciousness has also found expression in opposing schools of thought in the field of subatomic physics. In quantum mechanics, particles exist in what are called probability fields. Any measurement carried out on them by a measuring device causes what is called a collapse of the wave function, meaning that the particle suddenly assumes a particular location. A popular view is that such a measurement constitutes observation by a conscious observer… Thus the particle assumes a particular location … only when it is observed. Basically particles figure that if no one is bothering to look at them, they don’t need to decide where they are. I call this the Buddhist school of quantum mechanics …
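For reference, the textbook formalism lurking behind “probability fields” and “collapse” can be sketched as toy code (this is standard quantum mechanics, not Kurzweil’s wording, and the amplitudes are invented):

```python
import random

# A state is a set of amplitudes over positions; the Born rule turns
# amplitudes into probabilities; "collapse" replaces the spread-out
# state with the measured outcome.
amplitudes = {"x1": 0.6, "x2": 0.8}            # 0.6**2 + 0.8**2 = 1
probabilities = {x: c ** 2 for x, c in amplitudes.items()}

def measure(probs):
    outcome = random.choices(list(probs), weights=list(probs.values()))[0]
    collapsed = {x: (1.0 if x == outcome else 0.0) for x in probs}
    return outcome, collapsed

print(measure(probabilities))   # e.g. ('x2', {'x1': 0.0, 'x2': 1.0})
```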

(Photo: Niels Bohr)

Or as Niels Bohr put it, “A physicist is just an atom’s way of looking at itself.” He also claimed that we could describe electrons as exercising free will in choosing their positions–a statement I do not think he meant literally; “We must be clear that when it comes to atoms, language can be used only as in poetry,” as he put it.

Kurzweil explains the Western interpretation of quantum mechanics:

There is another interpretation of quantum mechanics… In this analysis, the field representing a particle is not a probability field, but rather just a function that has different values in different locations. The field, therefore, is fundamentally what the particle is. … The so-called collapse of the wave function, this view holds, is not a collapse at all. … It is just that a measurement device is also made up of particles with fields, and the interaction of the particle field being measured and the particle fields of the measuring device result in a reading of the particle being in a particular location. The field, however, is still present. This is the Western interpretation of quantum mechanics, although it is interesting to note that the more popular view among physicists worldwide is what I have called the Eastern interpretation.

(Photo: Soviet atomic bomb, 1951)

For example, Bohr put the yin-yang symbol on his coat of arms, along with the motto contraria sunt complementa–“contraries are complementary.” Oppenheimer was such a fan of the Bhagavad Gita that he read it in Sanskrit and quoted it upon the successful completion of the Trinity Test: “If the radiance of a thousand suns were to burst at once into the sky, that would be like the splendor of the mighty one,” and “Now I am become death, the destroyer of worlds.” He credited the Gita as one of the most important books in his life.

Why the appeal of Eastern philosophy? Is it something about physicists and mathematicians? Leibniz, after all, was fond of the I Ching. As Wikipedia says:

Leibniz was perhaps the first major European intellectual to take a close interest in Chinese civilization, which he knew by corresponding with, and reading other works by, European Christian missionaries posted in China. Having read Confucius Sinarum Philosophus on the first year of its publication,[153] he concluded that Europeans could learn much from the Confucian ethical tradition. He mulled over the possibility that the Chinese characters were an unwitting form of his universal characteristic. He noted with fascination how the I Ching hexagrams correspond to the binary numbers from 000000 to 111111, and concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical mathematics he admired.[154] Leibniz communicated his ideas of the binary system representing Christianity to the Emperor of China hoping it would convert him.[84] Leibniz may be the only major Western philosopher who attempted to accommodate Confucian ideas to prevailing European beliefs.[155]

Leibniz’s attraction to Chinese philosophy originates from his perception that Chinese philosophy was similar to his own.[153] The historian E.R. Hughes suggests that Leibniz’s ideas of “simple substance” and “pre-established harmony” were directly influenced by Confucianism, pointing to the fact that they were conceived during the period that he was reading Confucius Sinarum Philosophus.[153]

Perhaps it is just that physicists and mathematicians are naturally curious people, and Eastern philosophy is novel to a Westerner, or perhaps by adopting Eastern ideas, they were able to purge their minds of earlier theories of how the universe works, creating a blank space in which to evaluate new data without being biased by old conceptions–or perhaps it is just something about the way their minds work.

As for quantum mechanics, I favor the de Broglie–Bohm interpretation, but obviously I am not a physicist and my opinion doesn’t count for much. What do you think?

But back to the book. If you are fond of philosophical ruminations on the nature of consciousness, like “If someone who could only see in black and white read extensively about the color ‘red,’ could they ever achieve the qualia of actually seeing the color red?” or “What if a man were locked in a room with a perfect Chinese rulebook that told him which Chinese characters to write in response to any set of characters written on notes passed under the door? The responses are in perfect Chinese, but the man himself understands not a word of Chinese,” then you’ll enjoy the discussion. If you already covered all of this back in Philosophy 101, you might find it a bit redundant.

Kurzweil notes that conditions have improved massively over the past century for almost everyone on earth, but people are increasingly anxious:

A primary reason people believe life is getting worse is because our information about the problems of the world has steadily improved. If there is a battle today somewhere on the planet, we experience it almost as if we were there. During World War II, tens of thousands of people might perish in a battle, and if the public could see it at all, it was in a grainy newsreel in a movie theater weeks later. During World War I a small elite could read about the progress of the conflict in the newspaper (without pictures). During the nineteenth century there was almost no access to news in a timely fashion for anyone.

As for the future of man, machines, and code, Kurzweil is even more optimistic than Auerswald:

The last invention that biological evolution needed to make–the neocortex–is inevitably leading to the last invention that humanity needs to make–truly intelligent machines–and the design of one is inspiring the other. … by the end of this century we will be able to create computation at the limits of what is possible, based on the laws of physics… We call matter and energy organized in this way “computronium,” which is vastly more powerful pound per pound than the human brain. It will not just be raw computation but will be infused with intelligent algorithms constituting all of human-machine knowledge. Over time we will convert much of the mass and energy in our tiny corner of the galaxy that is suitable for this purpose to computronium. … we will need to spread out to the rest of the galaxy and universe. …

How long will it take for us to spread our intelligence in its nonbiological form throughout the universe? … waking up the universe, and then intelligently deciding its fate by infusing it with our human intelligence in its nonbiological form, is our destiny.

Whew! That is quite the ending–and with that, we will end, too. I hope you enjoyed the book. What did you think of it? Will Humanity 2.0 be good? Bad? Totally different? Or does the Fermi Paradox imply that Kurzweil is wrong? Did you like this shorter Book Club format? And do you have any ideas for our next Book Club pick?

Book Club: How to Create a Mind by Ray Kurzweil pt 1/2

Welcome to our discussion of Ray Kurzweil’s How to Create a Mind: The Secret of Human Thought Revealed. This book was requested by one of my fine readers; I hope you have enjoyed it.

If you aren’t familiar with Ray Kurzweil (you must be new to the internet), he is a computer scientist, inventor, and futurist whose work focuses primarily on artificial intelligence and phrases like “technological singularity.”

Wikipedia really likes him.

The book is part neuroscience, part explanations of how various AI programs work. Kurzweil uses models of how the brain works to enhance his pattern-recognition programs, and evidence from what works in AI programs to build support for theories on how the brain works.

The book delves into questions like “What is consciousness?” and “Could we recognize a sentient machine if we met one?” along with a brief history of computing and AI research.

My core thesis, which I call the Law of Accelerating Returns (LOAR), is that fundamental measures of information technology follow predictable and exponential trajectories…

I found this an interesting sequel to Auerswald’s The Code Economy and counterpart to Gazzaniga’s Who’s In Charge? Free Will and the Science of the Brain, which I listened to in audiobook form and therefore cannot quote very easily. Nevertheless, it’s a good book and I recommend it if you want more on brains.

The quintessential example of the law of accelerating returns is the perfectly smooth, doubly exponential growth of the price/performance of computation, which has held steady for 110 years through two world wars, the Great Depression, the Cold War, the collapse of the Soviet Union, the reemergence of China, the recent financial crisis, … Some people refer to this phenomenon as “Moore’s law,” but… [this] is just one paradigm among many.

(Chart from Ray Kurzweil)
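To make “exponential trajectory” concrete, here is a toy compounding calculation. It assumes a constant two-year doubling time, which is a simplification of my own invention: Kurzweil’s claim is doubly exponential, with the doubling time itself shrinking:

```python
# Compound doubling of price/performance, assuming a constant
# two-year doubling time (a simplification of Kurzweil's curve).
doubling_time_years = 2
for years in (10, 50, 110):
    improvement = 2 ** (years / doubling_time_years)
    print(f"{years:>3} years -> {improvement:.3g}x improvement")
# 10 years -> 32x
# 50 years -> 3.36e+07x
# 110 years -> 3.6e+16x
```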

Auerswald claims that the advance of “code” (that is, technologies like writing that allow us to encode information) has, for the past 40,000 years or so, supplemented and enhanced human abilities, making our lives better. Auerswald is not afraid of increasing mechanization and robotification of the economy putting people out of jobs because he believes that computers and humans are good at fundamentally different things. Computers, in fact, were invented to do things we are bad at, like decode encryption, not stuff we’re good at, like eating.

The advent of computers, in his view, lets us concentrate on the things we’re good at, while off-loading the stuff we’re bad at to the machines.

Kurzweil’s view is different. While he agrees that computers were originally invented to do things we’re bad at, he also thinks that the computers of the future will be very different from those of the past, because they will be designed to think like humans.

A computer that can think like a human can compete with a human–and since it isn’t limited in its processing power by pelvic widths, it may well out-compete us.

But Kurzweil does not seem worried:

Ultimately we will create an artificial neocortex that has the full range and flexibility of its human counterpart. …

When we augment our own neocortex with a synthetic version, we won’t have to worry about how much additional neocortex can physically fit into our bodies and brains, as most of it will be in the cloud, like most of the computing we use today. I estimated earlier that we have on the order of 300 million pattern recognizers in our biological neocortex. That’s as much as could be squeezed into our skulls even with the evolutionary innovation of a large forehead and with the neocortex taking about 80 percent of the available space. As soon as we start thinking in the cloud, there will be no natural limits–we will be able to use billions or trillions of pattern recognizers, basically whatever we need, and whatever the law of accelerating returns can provide at each point in time. …

Last but not least, we will be able to back up the digital portion of our intelligence. …

That is kind of what I already do with this blog. The downside is that sometimes you people see my incomplete or incorrect thoughts.

On the squishy side, Kurzweil writes of the biological brain:

The story of human intelligence starts with a universe that is capable of encoding information. This was the enabling factor that allowed evolution to take place. …

The story of evolution unfolds with increasing levels of abstraction. Atoms–especially carbon atoms, which can create rich information structures by linking in four different directions–formed increasingly complex molecules. …

A billion years later, a complex molecule called DNA evolved, which could precisely encode lengthy strings of information and generate organisms described by these “programs”. …

The mammalian brain has a distinct aptitude not found in any other class of animal. We are capable of hierarchical thinking, of understanding a structure composed of diverse elements arranged in a pattern, representing that arrangement with a symbol, and then using that symbol as an element in a yet more elaborate configuration. …

I really want to know if squids or octopuses can engage in symbolic thought.

Through an unending recursive process we are capable of building ideas that are ever more complex. … Only Homo sapiens have a knowledge base that itself evolves, grows exponentially, and is passed down from one generation to another.

Kurzweil proposes an experiment to demonstrate something of how our brains encode memories: say the alphabet backwards.

If you’re among the few people who’ve memorized it backwards, try singing “Twinkle Twinkle Little Star” backwards.

It’s much more difficult than doing it forwards.

This suggests that our memories are stored sequentially: they can be accessed only in the order in which they were memorized, and we are unable to reverse the sequence of a memory.

Funny how that works.
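In computing terms, this is how a singly linked list behaves: each element stores only a pointer to its successor, so forward traversal is cheap and reverse traversal requires rebuilding the whole chain first. The analogy is mine, not Kurzweil’s:

```python
# A memory stored as a forward-only chain: each item knows only its
# successor, the way one note of a song cues the next.
song = {"twinkle": "twinkle,", "twinkle,": "little", "little": "star", "star": None}

def recall_forward(chain, start):
    while start is not None:
        yield start
        start = chain[start]

print(" ".join(recall_forward(song, "twinkle")))  # twinkle twinkle, little star
# Recalling it backwards means replaying the chain forwards first to
# rebuild every link in reverse -- extra work, just as it is for us.
```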

On the neocortex itself:

A critically important observation about the neocortex is the extraordinary uniformity of its fundamental structure. … In 1957 Mountcastle discovered the columnar organization of the neocortex. … [In 1978] he described the remarkably unvarying organization of the neocortex, hypothesizing that it was composed of a single mechanism that was repeated over and over again, and proposing the cortical column as the basic unit. The differences in the height of certain layers in different regions noted above are simply differences in the amount of interconnectivity that the regions are responsible for dealing with. …

Extensive experimentation has revealed that there are in fact repeating units within each column. It is my contention that the basic unit is a pattern recognizer and that this constitutes the fundamental component of the neocortex.
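Kurzweil’s pattern recognizer is, at heart, a unit that fires when enough of its expected inputs are active, and whose output then becomes an input to recognizers higher up the hierarchy. A bare-bones sketch of the idea (my toy code, not anything from the book):

```python
class PatternRecognizer:
    """Fires when enough of its expected lower-level patterns are active."""
    def __init__(self, name, inputs, threshold=0.6):
        self.name, self.inputs, self.threshold = name, inputs, threshold

    def fires(self, active):
        hits = sum(1 for p in self.inputs if p in active)
        return hits / len(self.inputs) >= self.threshold

# A two-level hierarchy: strokes -> letters -> a word.
letter_a = PatternRecognizer("A", ["/", "\\", "-"])
letter_t = PatternRecognizer("T", ["-", "|"])
strokes = {"/", "\\", "-", "|"}                        # low-level input
letters = {r.name for r in (letter_a, letter_t) if r.fires(strokes)}
word_at = PatternRecognizer("AT", ["A", "T"])
print(word_at.fires(letters))                          # True
```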

As I read, Kurzweil’s hierarchical models reminded me of Chomsky’s theories of language–Ray and Noam are both associated with MIT and have probably conversed many times. Kurzweil does get around to discussing Chomsky’s theories and their relationship to his work:

Language is itself highly hierarchical and evolved to take advantage of the hierarchical nature of the neocortex, which in turn reflects the hierarchical nature of reality. The innate ability of humans to learn the hierarchical structures in language that Noam Chomsky wrote about reflects the structure of the neocortex. In a 2002 paper he co-authored, Chomsky cites the attribute of “recursion” as accounting for the unique language faculty of the human species. Recursion, according to Chomsky, is the ability to put together small parts into a larger chunk, and then use that chunk as a part in yet another structure, and to continue this process iteratively. In this way we are able to build the elaborate structure of sentences and paragraphs from a limited set of words. Although Chomsky was not explicitly referring here to brain structure, the capability he is describing is exactly what the neocortex does. …
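Recursion in this sense is easy to demonstrate: a finite rule that embeds instances of its own output can generate unboundedly deep sentences. A toy example (mine):

```python
# A finite rule that embeds its own output: clause -> "she said that" + clause.
def clause(depth):
    if depth == 0:
        return "the cat slept"
    return f"she said that {clause(depth - 1)}"

print(clause(3))
# she said that she said that she said that the cat slept
```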

Kurzweil’s account sounds good to me, but I am under the impression that Chomsky’s linguistic theories are now considered outdated. Perhaps that is only his theory of universal grammar, though. Any linguistics experts care to weigh in?

According to Wikipedia:

Within the field of linguistics, McGilvray credits Chomsky with inaugurating the “cognitive revolution“.[175] McGilvray also credits him with establishing the field as a formal, natural science,[176] moving it away from the procedural form of structural linguistics that was dominant during the mid-20th century.[177] As such, some have called him “the father of modern linguistics”.[178][179][180][181]

The basis to Chomsky’s linguistic theory is rooted in biolinguistics, holding that the principles underlying the structure of language are biologically determined in the human mind and hence genetically transmitted.[182] He therefore argues that all humans share the same underlying linguistic structure, irrespective of sociocultural differences.[183] In adopting this position, Chomsky rejects the radical behaviorist psychology of B. F. Skinner which views the mind as a tabula rasa (“blank slate”) and thus treats language as learned behavior.[184] Accordingly, he argues that language is a unique evolutionary development of the human species and is unlike modes of communication used by any other animal species.[185][186] Chomsky’s nativist, internalist view of language is consistent with the philosophical school of “rationalism“, and is contrasted with the anti-nativist, externalist view of language, which is consistent with the philosophical school of “empiricism“.[187][174]

Anyway, back to Kurzweil, who has an interesting bit about love:

Science has recently gotten into the act as well, and we are now able to identify the biochemical changes that occur when someone falls in love. Dopamine is released, producing feelings of happiness and delight. Norepinephrine levels soar, which leads to a racing heart and overall feelings of exhilaration. These chemicals, along with phenylethylamine, produce elevation, high energy levels, focused attention, loss of appetite, and a general craving for the object of one’s desire. … serotonin levels go down, similar to what happens in obsessive-compulsive disorder….

If these biochemical phenomena sound similar to those of the fight-or-flight syndrome, they are–except that we are running toward something or someone; indeed, a cynic might say toward rather than away from danger. The changes are also fully consistent with those of the early phase of addictive behavior. … Studies of ecstatic religious experiences also show the same physical phenomena; it can be said that the person having such an experience is falling in love with God, or whatever spiritual connection they are focused on. …

Religious readers care to weigh in?

Consider two related species of voles: the prairie vole and the montane vole. They are pretty much identical, except that the prairie vole has receptors for oxytocin and vasopressin, whereas the montane vole does not. The prairie vole is noted for lifetime monogamous relationships, while the montane vole resorts almost exclusively to one-night stands.

Learning by species:

A mother rat will build a nest for her young even if she has never seen another rat in her lifetime. Similarly, a spider will spin a web, a caterpillar will create her own cocoon, and a beaver will build a dam, even if no contemporary ever showed them how to accomplish these complex tasks. That is not to say that these are not learned behaviors. It is just that the animals did not learn them in a single lifetime… The evolution of animal behavior does constitute a learning process, but it is learning by the species, not by the individual, and the fruits of this learning process are encoded in DNA.

I think that’s enough for today; what did you think? Did you enjoy the book? Is Kurzweil on the right track with his pattern recognizers? Are non-biological neocortexes on the horizon? Will we soon convert the solar system to computronium?

Let’s continue this discussion next Monday–so if you haven’t read the book yet, you still have a whole week to finish.

 

Book Club: The Code Economy, Chapter 11: Education and Death

Welcome back to EvX’s book club. Today we’re reading Chapter 11 of The Code Economy, Education.

…since the 1970s, the economically fortunate among us have been those who made the “go to college” choice. This group has seen its income grow rapidly and its share of the aggregate wealth increase sharply. Those without a college education have watched their income stagnate and their share of the aggregate wealth decline. …

Middle-age white men without a college degree have been beset by sharply rising death rates–a phenomenon that contrasts starkly with middle-age Latino and African American men, and with trends in nearly every other country in the world.

It turns out that I have a lot of graphs on this subject. There’s a strong correlation between “white death” and “Trump support.”

(Graph: white vs. non-white Americans)

(Graph: American whites vs. other first-world nations; source)

But “white men” doesn’t tell the complete story, as death rates for women have been increasing at about the same rate. The Great White Death seems to be as much a female phenomenon as a male one–men just started out with higher death rates in the first place.

Many of these are deaths of despair–suicide, directly or through simply giving up on living. Many involve drugs or alcohol. And many are due to diseases, like cancer and diabetes, that used to hit later in life.

We might at first think the change is just an artifact of more people going to college–perhaps there was always a sub-set of people who died young, but in the days before most people went to college, nothing distinguished them particularly from their peers. Today, with more people going to college, perhaps the destined-to-die are disproportionately concentrated among folks who didn’t make it to college. However, if this were true, we’d expect death rates to hold steady for whites overall–and they have not.
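A quick sanity check with invented numbers shows why the composition story fails:

```python
# Invented numbers: a fixed 10% of the cohort is high-risk, and
# (worst case) none of them make it to college. Expanding college
# attendance raises the non-college death rate mechanically, but
# it cannot move the cohort-wide rate.
cohort, high_risk = 1000, 100
for college_share in (0.3, 0.7):
    non_college = cohort * (1 - college_share)
    print(f"college {college_share:.0%}: "
          f"non-college rate {high_risk / non_college:.1%}, "
          f"overall rate {high_risk / cohort:.1%}")
# college 30%: non-college rate 14.3%, overall rate 10.0%
# college 70%: non-college rate 33.3%, overall rate 10.0%
```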

Whatever is affecting lower-class whites, it’s real.

Auerswald then discusses the “permanent income hypothesis,” developed by Milton Friedman: children and young adults devote their time to education (even going into debt for it), which allows them to get better jobs in mid-life. Once working, they stop going to school and start saving for retirement. Then they retire.

The permanent income hypothesis is built into the very structure of our society, from Public Schools that serve students between the ages of 5 and 18, to Pell Grants for college students, to Social Security benefits that kick in at 65. The assumption, more or less, is that a one-time investment in education early in life will pay off for the rest of one’s life–a payout of such returns to scale that it is even sensible for students and parents to take out tremendous debt to pay for that education.
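That implied life path is easy to caricature in code. Here is a toy sketch with invented dollar figures (nothing below comes from Friedman or Auerswald; it just makes the structure visible):

```python
# Toy lifecycle under the permanent-income assumption: one early
# investment in education, then a lifetime of returns.
wealth = 0
for age in range(5, 86):
    if age < 18:
        pass                 # public school: no cost to the student
    elif age < 22:
        wealth -= 30_000     # college tuition, financed by debt (invented figure)
    elif age < 65:
        wealth += 15_000     # working years: earn and save (invented figure)
    else:
        wealth -= 25_000     # retirement: draw down savings (invented figure)
print(f"Wealth at 85: ${wealth:,}")  # balances to $0 in this toy
# The arithmetic only works if that one-time education keeps paying
# off across all ~43 working years.
```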

But this is dependent on that education actually paying off–and that is dependent on the skills people learn during their educations being in demand and sufficient for their jobs for the next 40 years.

The system falls apart if technology advances and thus job requirements change faster than once every 40 years. We are now looking at a world where people’s investments in education can be obsolete by the time they graduate, much less by the time they retire.

Right now, people are trying to make up for the decreasing returns to education (a high school diploma does not get you the same job today that it did in 1950) by investing more money and time into the single-use system–encouraging toddlers to go to school on the one end and poor students to take out more debt for college on the other.

This is probably a mistake, given the time-dependent nature of the problem.

The obvious solution is to change how we think of education and work. Instead of a single, one-time investment, education will have to continue after people begin working, probably in bursts. Companies will continually need to re-train workers in new technology and innovations. Education cannot be just a single investment, but a life-long process.

But that is hard to do if people are already in debt from all of the college they just paid for.

Auerswald then discusses some fascinating work by Bessen on how the industrial revolution affected incomes and production among textile workers:

… while a handloom weaver in 1800 required nearly forty minutes to weave a yard of coarse cloth using a single loom, a weaver in 1902 could do the same work operating eighteen Northrop looms in less than a minute, on average. This striking point relates to the relative importance of the accumulation of capital to the advance of code: “Of the roughly thirty-nine-minute reduction in labor time per yard, capital accumulation due to the changing cost of capital relative to wages accounted for just 2 percent of the reduction; invention accounted for 73 percent of the reduction; and 25 percent of the time saving came from greater skill and effort of the weavers.” … “the role of capital accumulation was minimal, counter to the conventional wisdom.”
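Bessen’s decomposition is worth a quick back-of-the-envelope check against the quoted figures (my arithmetic, not his):

```python
# Splitting the ~39-minute reduction in labor time per yard of
# cloth by Bessen's three causes.
reduction_minutes = 40 - 1   # ~40 minutes in 1800 down to ~1 in 1902
shares = {
    "capital accumulation":    0.02,
    "invention":               0.73,
    "weaver skill and effort": 0.25,
}
for cause, share in shares.items():
    print(f"{cause:<24} {share * reduction_minutes:5.1f} minutes")
# capital accumulation       0.8 minutes
# invention                 28.5 minutes
# weaver skill and effort    9.8 minutes
```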

Then Auerswald proclaims:

What was the role of formal education in this process? Essentially nonexistent.

Boom.

New technologies are simply too new for anyone to learn about them in school. Flexible thinkers who learn fast (generalists) thus benefit from new technologies and are crucial for their early development. Once a technology matures, however, it becomes codified into platforms and standards that can be taught, at which point demand for generalists declines and demand for workers with educational training in the specific field rises.

For Bessen, formal education and basic research are not the keys to the development of economies that they are often represented as being. What drives the development of economies is learning by doing and the advance of code–processes that are driven at least as much by non-expert tinkering as by formal research and instruction.

Make sure to read the endnotes to this chapter; several of them are very interesting. For example, #3 begins:

“Typically, new technologies demand that a large number of variables be properly controlled. Henry Bessemer’s simple principle of refining molten iron with a blast of oxygen worked properly only at the right temperatures, in the right size vessel, with the right sort of vessel refractory lining, the right volume and temperature of air, and the right ores…” Furthermore, the products of these factories were really ones that, in the United States, previously had been created at home, not by craftsmen…

#8 states:

“Early-stage technologies–those with relatively little standardized knowledge–tend to be used at a smaller scale; activity is localized; personal training and direct knowledge sharing are important, and labor markets do not compensate workers for their new skills. Mature technologies–with greater standardized knowledge–operate at large scale and globally, market permitting; formalized training and knowledge are more common; and robust labor markets encourage workers to develop their own skills.” … The intensity of interactions that occur in cities is also important in this phase: “During the early stages, when formalized instruction is limited, person-to-person exchange is especially important for spreading knowledge.”

This reminds me of a post on Bruce Charlton’s blog about “Head Girl Syndrome”:

The ideal Head Girl is an all-rounder: performs extremely well in all school subjects and has a very high Grade Point Average. She is excellent at sports, Captaining all the major teams. She is also pretty, popular, sociable and well-behaved.

The Head Girl will probably be a big success in life…

But the Head Girl is not, cannot be, a creative genius.

*

Modern society is run by Head Girls, of both sexes, hence there is no place for the creative genius.

Modern Colleges aim at recruiting Head Girls, so do universities, so does science, so do the arts, so does the mass media, so does the legal profession, so does medicine, so does the military…

And in doing so, they filter-out and exclude creative genius.

Creative geniuses invent new technologies; head girls oversee the implementation and running of code. Systems that run on code can run very smoothly and do many things well, but they are bad at handling creative geniuses–as many a genius will tell you, recalling their public school experience.

How the different stages in the adoption of new technology–and its codification into platforms–translate into wages over time is a subject I’d like to see more written about.

Auerswald then turns to the perennial problem of what happens when not only do the jobs change, they entirely disappear due to increasing robotification:

Indeed, many of the frontier business models shaping the economy today are based on enabling a sharp reduction in the number of people required to perform existing tasks.

One possibility Auerswald envisions is a kind of return to the personalized markets of yesteryear, before massive industrial giants like Walmart sprang up. Via internet-based platforms like Uber or AirBNB, individuals can connect directly with people who’d like to buy their goods or services.

Since services make up more than 84% of the US economy, and an increasingly comparable percentage in countries elsewhere, this is a big deal.

It’s easy to imagine this future in which we are all like some sort of digital Amish, continually networked via our phones to engage in small transactions like sewing a pair of trousers for a neighbor, mowing a lawn, selling a few dozen tacos, or driving people to the airport during a few spare hours on a Friday afternoon. It’s also easy to imagine how Walmart might still have massive economies of scale over individuals and the whole system might fail miserably.

However, if we take the entrepreneurial perspective, such enterprises are intriguing. Uber and Airbnb work by essentially “unlocking” latent assets–time when people’s cars or homes were sitting around unused. Anyone who can find other, similar latent assets and figure out how to unlock them could become similarly successful.

I’ve got an underutilized asset: the rural poor. People in cities are easy to hire and easy to direct toward educational opportunities. Kids growing up in rural areas are often out of the communications loop (the internet doesn’t work terribly well in many rural areas) and have to drive a long way to job interviews.

In general, it’s tough to network large rural areas in the same ways that cities get networked.

On the matter of why peer-to-peer networks have emerged in certain industries, Auerswald makes a claim that I feel compelled to contradict:

The peer-to-peer business models in local transportation, hospitality, food service, and the rental of consumer goods were the first to emerge, not because they are the most important for the economy but because these are industries with relatively low regulatory complexity.

No no no!

Food trucks emerged because heavy regulations on restaurants (eg, fire code, disability access, landscaping,) have cut significantly into profits for restaurants housed in actual buildings.

Uber emerged because the cost of a cab medallion–that is, a license to drive a cab–hit 1.3 MILLION DOLLARS in NYC. It’s a lucrative industry that people were being kept out of.

In contrast, there has been little peer-to-peer business innovation in healthcare, energy, and education–three industries that comprise more than a quarter of the US GDP–where regulatory complexity is relatively high.

Again, no.

There is a ton of competition in healthcare; just look up naturopaths and chiropractors. Sure, most of them are quacks, but they’re definitely out there, competing with regular doctors for patients. (Midwives appear to be actually pretty effective at what they do and significantly cheaper than standard ob-gyns.)

The difficulty with peer-to-peer healthcare isn’t regulation but knowledge and equipment. Most Americans own a car and know how to drive, and therefore can join Uber. Most Americans do not know how to do heart surgery and do not have the proper equipment to do it with. With training I might be able to set a bone, but I don’t own an x-ray machine. And you definitely don’t want me manufacturing my own medications. I’m not even good at making soup.

Education has tons of peer-to-peer innovation. I homeschool my children. Sometimes grandma and grandpa teach the children. Many homeschoolers join consortia that offer group classes, often taught by a knowledgeable parent or hired tutor. Even people who aren’t homeschooling their kids often hire tutors, through organizations like Wyzant, or afterschool test-prep centers like Kumon. One of my acquaintances makes her living primarily by Skype-tutoring Koreans in English.

And that’s not even counting private schools.

Yes, if you want to set up a formal “school,” you will encounter a lot of regulation. But if you just want to teach stuff, there’s nothing stopping you except your ability to find students who’ll pay you to learn it.

Now, energy is interesting. Here Auerswald might be correct. I have trouble imagining people setting up their own hydroelectric dams without getting into trouble with the EPA (not to mention everyone downstream).

But what if I set up my own windmill in my backyard? Can I connect it to the electric grid and sell energy to my neighbors on a windy day? A quick search brings up WindExchange, which says, very directly:

Owners of wind turbines interconnected directly to the transmission or distribution grid, or that produce more power than the host consumes, can sell wind power as well as other generation attributes.

So, maybe you can’t set up your own nuclear reactor, and maybe the EPA has a thing about not disturbing fish, but it looks like you can sell wind and solar energy back to the grid.

I find this a rather exciting thought.

Ultimately, while Auerswald does return to and address the need to radically change how we think about education and the education-job-retirement lifepath, he doesn’t return to the increasing white death rate. Why are white death rates increasing faster than other death rates, and will transition to the “gig economy” further accelerate this trend? Or was the past simply anomalous for having low white death rates, or could these death rates be driven by something independent of the economy itself?

Now, it’s getting late, so that’s enough for tonight, but what are your thoughts? How do you think this new economy–and educational landscape–will play out?