Book Club: Code Economy: Economics as Information Theory

If the suggestion… that the economy is “alive” seems fanciful or far-fetched, it is much less so if we consider the alternative: that it is dead.

Welcome back to your regularly scheduled discussion of Auerswald’s The Code Economy: A Forty-Thousand-Year History. Today we are discussing Chapter 9: Platforms, but feel free to jump into the discussion even if you haven’t read the book.

I loved this chapter.

We can safely answer that the economy–or the sum total of human provisioning, consumptive, productive and social activities–is neither “alive” nor, exactly, “non-living.”

The economy has a structure, yes. (So does a crystal.) It requires energy, like a plant, sponge, or macaque. It creates waste. But like a beehive, it is not “conscious”; voters struggle to make any kind of coherent policy.

Can economies reproduce themselves, like a beehive sending out swarms to found new hives? Yes, though it is difficult in a world where most of the sensible human niches have already been filled.

Auerswald notes that his use of the word “code” throughout the book is not (just) because of its modern sense in the coding of computer programs, but because of its use in the structure of DNA–we are literally built from the instructions in our genetic “code,” and society is, on top of that, layers and layers of more code, for everything from “how to raise vegetables” to “how to build an iPhone” to, yes, Angry Birds.

Indeed, as I have insisted throughout, this is more than an analogy: the introduction of production recipes into economics is… exactly what the introduction of DNA is to molecular biology. It is the essential first step toward a transformation of economics into a branch of information theory.

I don’t have much to say about information theory because I haven’t studied information theory, beyond once reading a problem about a couple of people named Alice and Bob who were trying to send messages to each other, but I did read Viktor Mayer-Schönberger and Kenneth Cukier‘s Big Data: A Revolution That Will Transform How We Live, Work, and Think a couple of weeks ago. It doesn’t rise to the level of “OMG this was great you must read it,” but if you’re interested in the subject, it’s a good introduction and pairs nicely with The Code Economy, as many of the developments in “big data” are relevant to recent developments in code. It’s also helpful in understanding why on earth anyone sees anything of value in companies like Facebook and LinkedIn, which will be coming up soon.

You know, we know that bees live in a hive, but do bees know? (No, not for any meaningful definition of “knowing.”) But imagine being a bee, and slowly working out that you live in a hive, and that the hive “behaves” in certain ways that you can model, just like you can model the behavior of an individual bee…

Anyway:

Economics has a lot to say about how to optimize the level of inputs to get output, but what about the actual process of turning inputs into outputs? … In the Wonderful World of Widgets that is standard economics, ingredients combine to make a final product, but the recipe by which the ingredients actually become the product is nowhere explicitly represented.

After some papers on the NK model and the shift in organizational demands from pre-industrial economic production to post-industrial large-scale production by mega-firms (or, in the case of communism, by whole states), Auerswald concludes that

…the economy manages increasing complexity by “hard-wiring” solutions into standards, which in turn define platforms.

Original Morse Telegraph machine, circa 1835 https://en.wikipedia.org/wiki/Samuel_Morse

This is an important insight. Electricity was once a new technology, whose mathematical rules were explored by cutting-edge scientists. Electrical appliances and the grid to deliver the electricity they run on were developed by folks like Edison and Tesla.

But today, the electrical grid reaches nearly every house in America. You don’t have to understand electricity at all to plug in your toaster. You don’t have to be Thomas Edison to lay electrical lines. You just have to follow instructions.

Electricity + electrical appliances replaced many jobs people used to do, like candle making or Pony Express mail delivery, but electricity has not resulted in an overall loss of jobs. Rather, far more jobs now exist that depend on the electrical grid (or “platform”) than were eliminated.

(However, one of the problems with codifying things into platforms is that systems then have difficulty handling other, perfectly valid methods of doing things. Early codification may lock in ways of doing things that are actually suboptimal, like how our computer keyboard layout is intentionally difficult to use not because of anything to do with computers, but because typewriters in the 1800s jammed if people typed on them too quickly. Today we would be better off with a more sensible keyboard layout, but the old one persists because too many systems use it.)

The Industrial Revolution was a time first of technological development, and then of encoding into platforms of many varieties–transportation networks of water and rail; electrical, sewer, and fresh water grids; the large-scale production of antibiotics and vaccines; and even the codification of governments.

The English, a nation of a couple thousand years or so, are governed under a system known as “Common Law,” which is just all of the legal precedents and traditions built up over that time that have come into customary use.

When America was founded, it didn’t have a thousand years of experience to draw on because, well, it had just been founded, but it did have a thousand years of cultural memory of England’s government. English Common Law was codified as the base of the American legal system.

The Articles of Confederation, famous only for not working very well, were the fledgling country’s first attempt at codifying how the government should operate. They are typically described as failing because they allocated insufficient power to the federal government, but I propose a more nuanced take: the Articles laid out insufficient code for dealing with nation-level problems. The Constitution solved these problems and instituted the basic “platform” on which the rest of the government is built. Today, whether we want to ratify a treaty or change the speed limit on I-405, we don’t have to re-derive the entire decision-making structure from scratch; legitimacy (for better or for worse) is already built into the system.

Since the days of the American and French revolutions, new countries have typically had “constitutions,” not because Common Law is bad, but because there is no need to re-derive successful governing platforms from scratch–they can just be copied from other countries, just as one firm can copy another firm’s organizational structure.

Continuing with Auerswald and the march of time:

Ask yourself what the greatest inventions were over the past 150 years: Penicillin? The transistor? Electrical power? Each of these has been transformative, but equally compelling candidates include universal time, container shipping, the TCP/IP protocols underlying the Internet, and the GSM and CDMA standards that underlie mobile telephony. These are the technologies that make global trade possible by making code developed in one place interoperable with code developed in another. Standards reduce barriers among people…

Auerswald, as a code enthusiast, doesn’t devote much space to the downsides of code. Clearly, code can make life easier, by reducing the number of cognitive tasks required to get a job done. Let’s take the matter of household management. If a husband and wife both adhere to “traditional gender norms,” such as an expectation that the wife will take care of internal household chores like cooking and vacuuming, and the husband will take care of external chores, like mowing the lawn, taking out the trash, and pumping gas, neither spouse has to ever discuss “who is going to do this job” or wonder “hey, did that job get done?”

Following an established code thus aids efficiency and appears to decrease marital stress (there are studies on this,) but this does not mean that the code itself is optimal. Perhaps men make better dish-washers than women. Or for a couple with a disabled partner, perhaps all of the jobs would be better performed by reversing the roles.

Technological change also encourages code change:

The replacement of manual push-mowers with gas-powered mowers makes mowing the lawn easier for women, so perhaps this task would be better performed by housewives. (Even the Amish have adopted milking machines on the grounds that by pumping the milk away from the cow for you, the machines enable women to participate equally in the milking–a task that previously involved picking up and carrying around 90 lb milk jugs.)

But re-writing the entire code is work and involves a learning curve as both parties sort out and get used to new expectations. (See my previous thread on “emotional labor” and its relation to gender norms.) So even if you think the old code isn’t fair or optimal, it still might be easier than trying to make a new code–and this extends to far more human relations than just marriage.

And then you get cases where the new technology is incompatible with the old code. Take, for example, the relationship between transportation, weights and measures, and the French Revolution.

A country in which there is no efficient way to get from Point A to Point B has no need for a standardized set of weights and measures, as people in Community A will never encounter or contend with whatever system they are using over in Community B. Even if a king wanted to create a standard system, he would have difficulty enforcing it. Instead, each community tends to evolve a system that works well for its own needs. A community that grows bananas, for example, will come up with measures suitable to bananas, like the “bunch,” a community that deals in grain will invent the “bushel,” and a community that enumerates no goods, like the Piraha, will not bother with large quantities at all.

(Diamonds are measured in “carats,” which have nothing to do with the orange vegetable, but instead are derived from the seeds of the carob tree, which apparently are small enough to be weighed against small stones.)

Since the French paid taxes, there was some demand for standardized weights and measures within each province–if your taxes are “one bushel of grain,” you want to make sure “bushel” is well defined so the local lord doesn’t suddenly define this year’s bushel as twice as big as last year’s bushel–and likewise, the lord doesn’t want this year’s bushel to be defined as half the size of last year’s.

But as roads improved and trade increased, people became concerned with making sure that a bushel of grain sold in Paris was the same as a bushel purchased in Nice, or that 5 carats of diamonds in Bordeaux was still 5 carats when you reached Cognac.

But the established power of the local nobility made it very hard to change whatever measures people were using in each individual place. That is, the existing code made it hard to change to a more efficient code, probably because the local lords were concerned that the new measures would result in fewer taxes, and the local peasants were concerned that they would result in higher taxes.

Thus it was only with the decapitation of the Ancien Regime and wiping away of the privileges and prerogatives of the nobility that Revolutionary France established, as one of its few lasting reforms, a universal system of weights and measures that has come down to us today as the metric or SI system.

Now, speaking as an American who has been trained in both Metric and Imperial units, using multiple systems can be annoying, but is rarely deadly. On the scale of sub-optimal ideas, humans have invented far worse.

Quoting Richard Rhodes, The Making of the Atomic Bomb:

“The end result of the complex organization that was the efficient software of the Great War was the manufacture of corpses.

This essentially industrial operation was fantasized by the generals as a “strategy of attrition.” The British tried to kill Germans, the Germans tried to kill British and French and so on, a “strategy” so familiar by now that it almost sounds normal. It was not normal in Europe before 1914 and no one in authority expected it to evolve, despite the pioneering lessons of the American Civil War. Once the trenches were in place, the long grave already dug (John Masefield’s bitterly ironic phrase), then the war stalemated and death-making overwhelmed any rational response.

“The war machine,” concludes Elliot, “rooted in law, organization, production, movement, science, technical ingenuity, with its product of six thousand deaths a day over a period of 1,500 days, was the permanent and realistic factor, impervious to fantasy, only slightly altered by human variation.”

No human institution, Elliot stresses, was sufficiently strong to resist the death machine. A new mechanism, the tank, ended the stalemate.”

Russian Troops waiting for death

On the Eastern Front, the Death Machine was defeated by the Russian Revolution, as the cannon fodder decided it didn’t want to be turned into corpses anymore.

I find World War I more interesting than WWII because it makes far less sense. The combatants in WWII had something resembling sensible goals, some chance of achieving their goals, and attempted to protect the lives of their own people. WWI, by contrast, has no such underlying logic, yet it happened anyway–proof that seemingly logical people can engage in the ultimate illogic, even as it reduces whole countries to nothing but death machines.

Why did some countries revolt against the cruel code of war, and others not? Perhaps an important factor is the perceived legitimacy of the government itself (though regular food shipments are probably just as critical). Getting back to information theory, democracy itself is a kind of blockchain for establishing political legitimacy (more on this in a couple of chapters), which may account for why some countries perceived their leadership as more legitimate, while other countries suddenly discovered, as information about other people’s opinions became easier to obtain, that the government enjoyed very little legitimacy.

But I am speculating, and have gotten totally off-topic (Auerswald was just discussing the establishment of TCP/IP protocols and other similar standards that aid international trade, not WWI!)

Returning to Auerswald, he cites a brilliant quote from Alfred North Whitehead:

“Civilization advances by extending the number of operations we can perform without thinking about them.”

As we were saying, while sub-optimal (or suicidal) code can and does destroy human societies, good code can substantially increase human well-being.

The discovery and refinement of new inventions, technologies, production recipes, etc., involves a steep learning curve, as people first figure out how to make the thing work and how to source and put together all of the parts necessary to build it (eg, the invention of the automobile in the late 1800s and early 1900s). But once the technology spreads, it simply becomes part of the expected infrastructure of everyday life (eg, the building of interstate highways and gas stations, allowing people to drive cars all around the US)–a “platform” on which other, future innovations build. Post-1950, most automobile-driven innovation was located not in refinements to the engines or brakes, but in things you can do with vehicles, like long-distance shipping.

Interesting things happen to income as code becomes platforms, but I haven’t worked out all of the details.

Continuing with Auerswald:

Note that, in code economics, a given country’s level of “development” is not sensibly measured by the total monetary value of all goods and services it produces…. Rather, the development of a country consists of … its capacity to execute more complex code. …

Less-developed countries that lack the code to produce complex products will import them, and they will export simpler intermediate products and raw materials in order to pay for the required imports.

By creating and adhering to widely-observed “standards,” increasing numbers of countries (and people) are able to share inventions, code, and development.

Of the drivers of beneficial trade, international standards are at once among the most important and the least appreciated. … From the invention of bills of exchange in the Middle Ages … to the creation of twenty-first-century communications protocols, innovations in standards have lowered the cost and enhanced the value of exchange across distance. …

For entrepreneurs in developing countries, demonstrated conformity with international standards… is a universally recognized mark of organizational capacity that substantially eases entry into global production and distribution networks.

In other words, you are more likely to order steel from a foreign factory if you have some confidence that you will actually receive the variety you ordered, and the factory can signal that it knows what it is doing and will actually deliver the specified steel by adhering to international standards.

On the other hand, I think this can degenerate into a reliance on the appearance of doing things properly, which partially explains the Elizabeth Holmes affair. Holmes sounded like she knew what she was doing–she knew how to sound like she was running a successful startup because she’d been raised in the Silicon Valley startup culture. Meanwhile, the people investing in Holmes’s business didn’t know anything about blood testing (Holmes’s supposed invention tested blood)–they could only judge whether the company sounded like it was a real business.

Auerswald then has a fascinating section comparing each subsequent “platform” that builds on the previous “platform” to trophic levels in the environment. The development of each level allows for the development of another, more complex level above it–the top platform becomes the space where newest code is developed.

If goods and services are built on platforms, one atop the other, then it follows that learning at higher levels of the system should be faster than learning at lower levels, for the simple reason that learning at higher levels benefits from incremental learning all the way down.

There are two “layers” of learning. Raw material extraction shows high volatility around a gradually increasing trend, aka slow learning. By contrast, the delivery of services over existing infrastructure, like roads or wireless networks, shows exponential growth, aka fast learning.

In other words, the more levels of code you already have established and standardized into platforms, the faster learning goes–the basic idea behind the singularity.
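If you want to see what “slow learning” vs. “fast learning” looks like, here’s a toy sketch in Python (my own invented numbers, not Auerswald’s actual model) contrasting a raw-material process that improves slowly and linearly with a platform-based service that compounds:

```python
import math

# Toy illustration (invented parameters, not Auerswald's model):
# raw-material extraction improves slowly and roughly linearly,
# while services built atop existing platforms compound.

def slow_learning(year: int, rate: float = 0.01) -> float:
    """Output index for raw-material extraction: slow, linear gains."""
    return 1.0 + rate * year

def fast_learning(year: int, rate: float = 0.25) -> float:
    """Output index for a platform-based service: compounding gains."""
    return math.exp(rate * year)

for year in (0, 5, 10, 20):
    print(f"year {year:2d}: extraction {slow_learning(year):.2f}, "
          f"platform service {fast_learning(year):8.1f}")
```

The exact rates are arbitrary; the point is the shape–one curve crawls, the other explodes.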

 

That’s all for today. See you next week!

A Response to Epigenetics and Ethics: Rights and Consequences

Dr. Robison–author of Epigenetics and Public Policy–asks an essential question: Where does the right to swing one’s epigenome end? Or as he puts it:

If epigenetics does introduce scientific novelties to the conventional understanding of biology, then according to the model it also has equally significant ethical and political implications.

What responsibility do I–as an egg-bearing person–have to ensure the health of my children’s and grandchildren’s epigenomes? Society affirms my right to smoke cigarettes, even though they may give me cancer down the road–it’s my body and I am allowed to do what I wish with it. But what if my smoking cigarettes today causes cancer in a future, as yet unborn grandchild whom I never meet? What about her right to choose not to be exposed to carcinogens? Who am I to take that from her–and what right has society, the government, or anyone else to tell me what I may or may not do with my own body in the interests of some future people who may never come into existence?

I am summarizing, perhaps badly; you may read the whole post over on Dr. Robison’s blog. (Of course Robison is himself trying to summarize an argument I am sure he lays out in much more detail in his book.)

Here is my hastily written response, in the interest of clear conversational threading:

I’m not sure epigenetics constitutes such a fundamental shift in our understanding of genetics and inheritance as to actually warrant much change in our present policies. For example, you question whether policies should be enacted to restrict a 12-yr-old girl’s right to eat what she wishes in defense of her unborn grandchild’s epigenome, but today we don’t even restrict a pregnant woman’s right to drink or smoke. Cocaine is illegal, but last time I checked, women didn’t go to prison for giving birth to crack babies. For that matter, women are allowed to kill unborn babies. I’m not commenting for or against abortion, just noting that it is legal and most people consider death kind of a big deal. So I don’t think society is about to start outlawing stuff because of its negative effects two generations down the road.

On the other hand, if you look at the data on smoking, rates have definitely been falling ever since the tobacco-cancer link became news. The gov’t didn’t have to outlaw smoking for a lot of women to stop smoking for their children’s health.

But let’s return to the philosophical argument. All men are created equal… or are they? I do not think the Founding Fathers ever meant equality in a genetic sense. They could see with their own eyes that some men were tall and others short, some wise and others foolish, some virtuous and others criminal. They could see that sons and daughters took after their parents and that a great many people started life in horribly unfair circumstances while others lived in luxury. They could see the cruel unfairness of disease, disability, and early death. Their rejection was not of biological or factual inequalities but of spiritual inequality. They rejected the notion that some men are created special by God to rule over others, and some men are created inferior by God, to be ruled over.

You state, “However, the evidence emerging from epigenetics suggests this is not the case. Instead of individuals of each generation being born with a pristine copy of their biological essence, they are inheriting a genetic endowment riddled with markers of the experiences of their parents and grandparents and great-grandparents, and so on. And these inherited epigenetic markers, as more and more research is showing, are having direct effects on the physical and mental health of individuals from causes not actually experienced by these individuals.”

I think there is a mistake here in regarding genetics as “pristine” in some form. What if my mother is an anxious person, and I, through environmental exposure, grow into a similarly anxious person? What if my mother has a gene for anxiety, and I inherit it? What if I possess a de novo genetic mutation that causes me to be anxious? And what if I suffer a genetic deletion in one of my chromosomes that causes anxiety? How is any of this different, functionally, from some trauma my mother suffered (say, a car accident) causing epigenetic changes that are subsequently passed on to me?

What is pristine about Down’s Syndrome, Williams’, or Klinefelter’s? Or just having the random bad luck to get genes for short, dumb, and ugly?

“For example, research in epigenetics shows that the choices and experiences of individuals in one generation are conditioning the basic nature of individuals of subsequent generations, which indelibly affects how those new individuals will exercise their own rights. ”

It can’t be indelible. For starters, you only inherit half of each parent’s genome–thus half their epigenome. So right there’s a 50% chance you won’t inherit any particular epigenetic marker. By gen two we’re talking 25% chance, and that’s not counting the constant re-writing of our epigenomes. However, I don’t think the policy implications for countries are all that different from our current thinking. We can say, for example, “If we have X level of pollution in the water, then Y number of people will get cancer,” and it’s a public health problem even if we don’t know “they’ll get cancer because of epigenetics.”
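Here’s that dilution as a quick sketch, assuming each marker has an independent 50% transmission chance per generation (a simplification–real epigenomes are also actively erased and re-written, which only makes persistence less likely):

```python
# Toy calculation: chance that a particular epigenetic marker survives
# n generations, assuming an independent 50% transmission chance per
# generation. This ignores the active erasure and re-writing of
# epigenomes mentioned above, which would shrink these numbers further.

def marker_survival(generations: int) -> float:
    return 0.5 ** generations

for n in range(1, 5):
    print(f"generation {n}: {marker_survival(n):.1%} chance the marker persists")
```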

So let’s broaden the inquiry a bit. Not how does epigenetics impact classical liberalism (which is behind us, anyway,) but how do genetics, epigenetics, heritability, etc., all influence our modern sensibilities? Modern liberalism is built almost as a reaction against former racialist notions of “blood,” with a consequent belief that people are, on average, roughly genetically equal. This butts up against the realization that some people are gifted and talented from birth, which many people quietly rationalize away while knowing they are being a bit dishonest, perhaps on the grounds that it is tantamount to statistical noise.

But the whole notion of “meritocracy” becomes more problematic if we admit that there’s a large genetic (or accidental, or environmental, or anything outside of free will,) contribution to IQ, educational attainment, mental illness, your chances of getting a good job, how other people treat you (because of attractiveness,) etc. Should a person who is dumb through no fault of their own suffer poverty? Should an ugly person be denied a job or a date? There’s an essential unfairness to it, after all.

But by the same token, what are you going to do about it? Declare that everyone under a certain IQ gets free money? What sort of incentives does that set up for society? And what does it do to someone’s self-image if they are Officially Declared Stupid?

But this is all focused on the negative. What if we find ways to make people smarter, healthier, stronger? I think we’d take them. Sure, we’d have a few hold-outs who worry about “playing god,” (much as today we have people who worry about vaccines despite the massive health improvements public vaccination campaigns have caused.) But in the end we’d take them. Similarly, in the end, I think most people would try to avoid damaging their descendants’ epigenomes–even if not through direct public policy.

 

Addendum: while I am skeptical of most claims about epigenetics, eg, people claiming that epigenetic trauma can be transmitted for over a century, there do seem to be some things that cause what we can here characterize as multi-generational epigenetic effects. For example, the drug diethylstilbestrol (DES), given to pregnant women to prevent miscarriages back in the 70s, not only causes cancer in the women it was given to, but also in their daughters. (It also results in intersex disorders in male fetuses.) In the third generation (that is, the sons and daughters of the fetuses that were exposed to the DES their mothers took during pregnancy,) there are still effects, like an increased risk of irregular periods. This is not necessarily “epigenetic,” but it is similar enough to include in the conversation.

Betrayal


The US government tested the effects of nuclear radiation and atomic warfare on live human subjects–our own soldiers. Called the Desert Rock Exercises (and Operation Plumbbob), they destroyed the lives of thousands of Americans.

“In Operation Desert Rock, the military conducted a series of nuclear tests in the Nevada Proving Grounds between 1951 and 1957, exposing thousands of participants – both military and civilian – to high levels of radiation.

“In total, nearly 400,000 American soldiers and civilians would be classified as ‘atomic veterans.’

“Though roughly half of those veterans were survivors of World War II, serving at Hiroshima and Nagasaki, Japan, the rest were exposed to nuclear grounds tests which lasted until 1962.”

Sure, we could have tested it on pigs, or monkeys, or cows, but nothing beats marching your own people into an atomic blast to see if it gives them cancer.


Of course it gives them cancer.

The Soviets did similar things to their own soldiers. In 1954, the Soviets dropped a 40-kiloton atomic weapon on 45,000 of their own troops, just north of Totskoye. More on Totskoye, and more. I don’t know for sure if these photos are from those tests, but they’re awfully haunting:

[photos]

One of my–let us say Uncles–died in Vietnam. He was 17. His mother, who had signed the papers to let him enlist even though he wasn’t 18, who had thought the army would be a good thing for him, sort him out, get his life on track, never recovered.

His name is not on the Vietnam Memorial.


And for what did we die in France’s war to retain its colonies?

I think I’m starting to understand these guys:

[photos: Vietnam veterans]

“If you can’t say something nice, don’t say anything at all”

Back in anthropology class, we talked rather extensively about ethics–specifically, What if you write things that hurt the people you’re studying?

Take the Yanomami (also spelled Yanomamo, not to be confused with the Yamnaya.) While there is some dispute over why the Yanomami are violent, and they probably aren’t the most violent people on Earth, they certainly commit their fair share of violence:

Graph from the Wikipedia
See also my post, “No, Hunter Gatherers were not Peaceful Paragons of Gender Egalitarianism.”

In 1967, anthropologist Napoleon Alphonseau Chagnon published Yanomamö: The Fierce People, which quickly became a bestseller among both anthropology students and the general public. On the less scholarly side, Ruggero Deodato released the horror film “Cannibal Holocaust” in 1980, a fictional account of Yanomami cannibalism. I have truly never understood the interest in or psychology behind the horror genre, but apparently the film has been banned in 50 countries and is highly regarded by people who like that sort of thing–Total Film ranked it the tenth greatest horror film of all time, Wired included it in its list of the top 25 horror films, and IGN ranked it 8th on its list of the ten greatest grindhouse films–so I assume that means it was very horrific and very popular.

Aside from allegations that the anthropologists themselves gave/traded the Yanomami a bunch of weapons that resulted in a big increase in the number of deaths, there are claims that, due to the Yanomamis’ violent reputation, miners in the area have opted to just shoot them on sight. The Yanomami are a rather small, isolated people with no political power to speak of, little immunity to Western diseases, and a lot of negative interactions with miners.

In a more mundane example, an anthropologist I spoke with referred to widespread illegal drug use among a people they had lived with. No one wants to cause legal trouble for their friends/companions/hosts–getting people arrested after they opened their homes and lives to you would be really shitty. But the behavior remains. None of us is perfect; every group does things that other people disapprove of or that would not look so great if someone wrote it down and published it.

The Thumper approach–that most favored by Americans, I suspect–is to just try not to say anything that might be perceived as not-nice. Report all of the good, and just leave out the bad.

My inner Kantian insists that this is lying, and that lying is bad. Maybe I can gloss over illegal drug use, but if I write a book about the Yanomami and don’t mention violence, I am a baldfaced liar, and you, my reader, should be mad at me for deceiving you. If I am in the business of describing humans and fail to do so, then like a stool with two legs, I am not doing my job.

The other solution I see people employ is to dress up all of the potentially negative things in dry, technical language so that normal people don’t notice. This would be the opposite of making a movie like Cannibal Holocaust. We use terms like “genetic introgression” and “affective empathy” and “clades,” which people who don’t generally read technical materials on the subject tend not to be familiar with.

But there are times when there are no two ways about it–there are no polite, responsible ways to disguise the truth, you don’t want to lie, and you genuinely don’t want to cause distress or trouble for anyone.

We never came up with a solution back in class. I still don’t have one.

Let’s Talk Math

Here’s a hypothetical for you. Suppose Population A and Population B both live in a country. Population A started the place. They built it from scratch–farms, roads, transportation networks, the whole sheboodle. Population B doesn’t do so well financially, educationally, or organizationally, but they sure do have a lot of kids. In fact, while PopA has a modest 2 kids per couple, PopB does its best to turn out 6 or 7.

Most of PopA is totally oblivious, but a few smart guys in PopA can do math, and realize that pretty soon, PopB will outnumber them. Some among them start claiming that if they let PopB run things, well, things’ll just stop running.
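The math those smart guys are doing is just compound growth. A minimal sketch, with starting populations invented purely for illustration:

```python
# Hypothetical numbers: PopA starts at 1,000,000 with 2 children per
# couple (stable); PopB starts at 100,000 with 6 children per couple
# (triples each generation). Both starting sizes are made up.

pop_a, pop_b = 1_000_000.0, 100_000.0
generation = 0

while pop_b < pop_a:
    generation += 1
    pop_a *= 2 / 2  # children per couple / 2 = per-generation growth factor
    pop_b *= 6 / 2
    print(f"generation {generation}: PopA {pop_a:,.0f}, PopB {pop_b:,.0f}")

# With these numbers, PopB outnumbers PopA within three generations.
```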

Ethically speaking, what should PopA do?

Conservation of Caring

I hypothesize that humans have only so many shits to give.

Some of us start out with more inherent ability to care than others do, but however much caring you’ve got in you, there’s probably not a lot you can do to increase it beyond that basic amount.

What you can do, however, is shift it around.

If things are going really badly for you, you’ll dedicate most of your energy to yourself–dealing with sickness, job loss, divorce, etc.–leaving very little energy left over for anyone else. You are simply empty. You have no more shits to give.

If things are going badly for someone close to you–family or friend–you’ll dedicate much of your energy to them. A sick or suffering child, for example, will completely absorb your care.

Beyond your immediate circle of close friends and family, the ability to care about others drops dramatically, as the number of others increases dramatically. You might give a suffering acquaintance $5 or an hour of your time, but it is rare to otherwise go out of one’s way for strangers.

There are just way too many people in this category to care deeply about all of them. You don’t have that much time in your day. You can, however, care vaguely about their well-being. You can read about an earthquake in Nepal and feel really bad for those people.

One of the goals of moralists and philosophers has been (I think) to try to increase people’s concern for the well-being of others. If concern for others can actually be *increased*, then we may be able to care about ever-bigger groups of people. This would be especially good for people in modern society: since we now live among millions of people, in countries of hundreds of millions, on a planet with billions, while possessing nuclear weapons and the ability to destroy our own environment, it is pretty important that we feel at least some vague sense of responsibility toward people who are not within our immediate friend/family circles.

Even if moralizers and the like can only cause a small increase in the amount of caring we can do, that could still be the difference between nuking a million people or not, so it’s a valuable thing to try.

(Note that this kind of large-scale concern is probably entirely evolutionarily novel, as the ability to even know that people exist on the other side of the planet is evolutionarily novel. Most people throughout human history lived more or less in tiny hunter-gatherer bands and people not in their bands were basically enemies; it is only in a handful of countries over the past couple thousand years or so that this basic pattern has shifted.)

But to the extent that the number of shits we can give is fixed, we might end up just shuffling around our areas of concern.

And doing that seems likely to be prone to a variety of difficulties, like outrage fatigue (being unable to sustain a high level of caring for very long,) missing vital things that we should have been concerned about while being concerned about other things, and fucking things up via trying to fix problems we don’t actually know the first thing about and then getting distracted by the next concerning thing without ever making sure we actually improved things.

Well-meaning people often try hard to care about lots of things; they feel like they should be, somehow, treating others as they would themselves–that is, extending to everyone in the world the same level of caring and compassion. This is physically impossible, which leads to well-meaning people feeling bad about their inability to measure up to their own standards of goodness. As Scott Alexander points out, it’s better to set reasonable goals for yourself and accomplish them than to set unreasonable goals and then fail.

My own recommendation is to beware of “caring” that is really just social posturing (putting someone down for not being hip to the latest political vocabulary, or not knowing very much about an obscure issue,) or any case of suddenly caring about the plight of “others” far away from you whom you didn’t care about five minutes ago. (Natural disasters excepted, as they obviously cause a significant change in people’s conditions overnight.) Understand your limits–realize that trying to solve problems of people you’ve never met and whom you know virtually nothing about is probably not going to work, but you can make life better for your friends, family, and local community. You can concentrate on understanding a few specific issues and devote time and resources to those.

Effective Altruists are Cute but Wrong

Note: For a more complete derivation, start with Survival of the Moral-ist (and Morality is what other People Want you to do.)

Effective Altruists mean well. They want to be moral people. They just don’t quite get what morality is. This leads to amusing results, like EAs stressing out over whether they should donate all of their money to make animals happy, and whether, if they don’t sacrifice themselves to the “save the cockroaches” fund, they are being meta-inconsistent.

The best synthesis of game-theoretic morality and evolutionary morality is that morality is about mutual systems of responsibility toward each other. You have no moral duties toward people (animals, beings, etc.,) who have none toward you. Your dog loves you and would sacrifice himself for you, so you have a moral obligation to your dog. A random animal feels no obligation to you and would not help you in even the most dire of situations. You have no moral obligation to them. (Nice people are nice to animals anyway because niceness is a genetic trait, and so nice people are nice to everyone.)

The EA calculations fail to take into account the opportunity cost of your altruism: if I donate all of my money to other animals, I no longer have money to buy bones for my dog, and my dog will be unhappy. If I spend my excess money on malaria nets for strangers in Africa, then I can’t spend that money on medical treatment for my own children.

If you feel compelled to do something about Problem X, then it’s a good idea to take the EA route and try to do so effectively. If I am concerned about malaria, then of course I should spend my time/money doing whatever is best to fight malaria.

As I mentioned in my post about “the other,” a lot of people just use ideas/demands about what’s best for people they really have no personal interest in as a bludgeon in political debates with people they strongly dislike. If you are really concerned about far-off others, then by all means, better the EA route than the “destroying people’s careers via Twitter” route.

But morality, ultimately, is about your relationships with people. EA does not account for this, and so is wrong.

Anonymous Sex with Strangers still Spreads Disease…

surprising no one but the idiots having anonymous sex with strangers.

Grindr and Tinder creating health epidemics:

“Casual and anonymous sex arranged via social media sites, such as Tinder and Grindr, has led to an increase in STDs across the US state of Rhode Island, health officials have announced.

The Rhode Island Department of Health announced that between 2013 and 2014, there was a 79% increase in syphilis, a 30% increase in gonorrhoea and a 33% increase in HIV. …

A statement from the department said: “The recent uptick in STDs in Rhode Island follows a national trend. The increase has been attributed to better testing by providers and to high-risk behaviours that have become more common in recent years.”

Between my upbringing by Christian conservatives and college days surrounded by bi-poly-pagans and BDSM fetishists (you laugh, but it’s true,) I’ve had to spend a lot of time trying to figure out what the hell morality is. And since a lot of people are shit at actually explaining anything, I have generally defaulted to a holistic approach of whether or not a particular approach leads to human suffering, or whether the person claiming a certain morality is generally a decent human or not.

Since then, I’ve developed better ways of understanding morality, but these rules of thumb still apply: if you are hurting other people for purely self-centered reasons, like giving them diseases just so you can have sex, or creating a system that spreads diseases so you can get rich, then you are a terrible human.

Tolerance is a Meta-Value

Tolerance doesn’t mean liking what other people do. It just means not interfering with them.

If neither of us can get the upper hand, then it is sensible to institute a non-interference policy. But if one of us could get the upper hand, tolerance becomes something we do out of a more sociable moral conviction.

Tolerance is a core American value, because of its importance in the founding of the country. As such, people on all sides of the political aisle have generally espoused it, at least in theory. Even people who are very strict in their personal opinions about how people should conduct themselves can agree, generally, that we should tolerate people who disagree with them.

Difficulties with tolerance:
1. Some groups/people are more tolerant of each other than other groups.
2. Tolerating people who don’t tolerate you back is generally a bad idea.
3. Some groups/people do things that other groups find really heinous.
4. Third parties who did not consent to be part of a society, like children, can still be affected by it.

5. Mistaking tolerance for a primary value rather than a meta-value. This leads to people trying to force other people to be tolerant, which quickly starts looking like intolerance.

These suggest some practical limits to tolerance, even though I generally argue that people should be more tolerant.

Studies: Disgust, Prisoner’s Dilemma

Disgust leads people to lie and cheat; cleanliness leads to ethical behavior

and

Prisoners better at Prisoner’s Dilemma than non-Prisoners

 

Quotes:
“… In one experiment, participants evaluated consumer products such as antidiarrheal medicine, diapers, feminine care pads, cat litter and adult incontinence products. In another, participants wrote essays about their most disgusting memory. In the third, participants watched a disgusting toilet scene from the movie “Trainspotting.” Once effectively disgusted, participants engaged in experiments that judged their willingness to lie and cheat for financial gain. Mittal and colleagues found that people who experienced disgust consistently engaged in self-interested behaviors at a significantly higher rate than those who did not.

“In another set of experiments, after inducing the state of disgust on participants, the researchers then had them evaluate cleansing products, such as disinfectants, household cleaners and body washes. Those who evaluated the cleansing products did not engage in deceptive behaviors any more than those in the neutral emotion condition.

“At the basic level, if you have environments that are cleaner, if you have workplaces that are cleaner, people should be less likely to feel disgusted,” Mittal said. “If there is less likelihood to feel disgusted, there will be a lower likelihood that people need to be self-focused and there will be a higher likelihood for people to cooperate with each other.” ”

SO GO WASH YOUR HANDS!

and…

“for the simultaneous game, only 37% of students cooperate. Inmates cooperated 56% of the time.

On a pair basis, only 13% of student pairs managed to get the best mutual outcome and cooperate, whereas 30% of prisoners do.

In the sequential game, far more students (63%) cooperate, so the mutual cooperation rate skyrockets to 39%. For prisoners, it remains about the same.”
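For anyone who hasn’t seen the game before, here’s a minimal sketch of one simultaneous round, using the standard textbook payoff ordering rather than whatever stakes the study actually used:

```python
# Standard textbook prisoner's dilemma payoffs (higher = better).
# The actual stakes in the cited study differed; this is just a
# sketch of the game's structure, not the experimental setup.

PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # player 1 is the "sucker"; player 2 defects
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def play(move_1: str, move_2: str) -> tuple[int, int]:
    """Score one simultaneous round: each player picks C or D
    without seeing the other's move."""
    return PAYOFFS[(move_1, move_2)]

print(play("C", "C"))  # (3, 3) -- the outcome only 13% of student pairs reached
```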

Several things may be going on:
1. Defecting on your fellow prisoners may have really negative consequences that college students don’t face.
2. Prisoners may identify strongly with each other as fellow prisoners.
3. Prisoners may be united by some form of hatred for the people keeping them in prison, leading them to cooperate with each other over outsiders even when they don’t like each other.
4. Prisoners may have been through enough bad crap already in their lives that the promise of a few cigarettes seems trivial and not worth defecting over.
5. Prisoners are drawn disproportionately from a population that happens to have strong norms or instincts about not defecting.
6. College students are jerks.