# 2+2 is 4 and the world is flat: assumptions and approximations

The argument that “2+2=4” is a social construct not found in every society and that in some places 2+2=5 is an interesting exercise in sophistry.

It is true that you can redefine every part of an equation (or every word in a sentence) to mean something other than what a naive reader would normally assume based on all previous experience with words. Obviously, if we redefine 2 to mean something other than 2, + to mean something other than addition, or choose to use a base other than base 10, then we can get an answer other than 4. For example, if we are using base 3, then 2+2 = 11. (Of course, when we convert back to base ten, “11” becomes plain old 4 again.)
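The point is easy to check mechanically: the quantity never changes, only the digits we use to write it down. Here is a minimal sketch (the helper `to_base` is mine, not a standard function):

```python
def to_base(n, base):
    """Render a non-negative integer n as a string of digits in the given base."""
    if n == 0:
        return "0"
    digits = []
    while n:
        digits.append(str(n % base))
        n //= base
    return "".join(reversed(digits))

# 2 + 2 is always four of something; only the notation changes.
print(to_base(2 + 2, 10))  # "4" in base ten
print(to_base(2 + 2, 3))   # "11" in base three
print(int("11", 3))        # and "11" read as base three is plain old 4 again
```

Python’s built-in `int(string, base)` does the reverse conversion, which is why the round trip lands back on 4.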

This is true of every sentence: if I redefine all of the words in a sentence to mean something else, then the sentence means something else–but no one uses language in this way because it makes communication impossible.

In particular, when children are taught that 2+2=4, they are being taught within a system where 2 means two of something, 4 means four of something, and + means conventional addition. When we use these definitions, 2 + 2 always equals 4. There are, in fact, zero societies on Earth where this equation, as used in elementary schools, comes out to five.

There is no mistake involved in assuming that other people are using common conventions when speaking and will specify when using terms in unexpected ways. This is how all communication works. Since we cannot define all words from first principles every time we use them (this is impossible because it would require us to define the words used to define the words used to define the words, etc.), we only bother to define them when using them in unconventional ways, and even then we use conventionally defined words to define them. If a word is not defined or otherwise marked as being used in an unconventional way, then the receiver assumes that it is being used in its conventional sense, because language cannot function otherwise.

Behind the scenes trickery is simply that: trickery. The sentence as normally defined and automatically understood is always correct.

There is a story about the time Denis Diderot visited the court of Catherine the Great in Russia. Diderot’s atheism offended the great lady, so she had her court mathematician, Leonhard Euler, confront Diderot with a complicated algebraic equation, then proclaimed that this proved the existence of God.

The tale is perhaps apocryphal, but it has inspired the coining of the term “Eulering”: the use of a complicated argument to confuse your opponent into conceding, especially on some irrelevant point. Modern Eulering often consists of saying something that sounds blatantly false, then when this is pointed out, ridiculing your opponent for not knowing that you had secretly redefined the words. If you were a real expert, the argument goes, then you would know that 10=2 is just as valid as 10=10, because base ten is just a social convention we use to make writing numbers easier, and all other bases–including base pi–are equally legitimate from a mathematical perspective.

This is not expertise, but sophistry. There is no mistake involved in assuming that other people are using common conventions when speaking and will specify when using terms in unexpected ways. This is how all communication works.

It is true that math as taught to children is simplified: all subjects are simplified by necessity for introductory students.

When a child learns to read, he is first taught to pronounce the letters phonetically; complications like “silent e” and “-tion” are only introduced later. The full complexity of English spelling, from rhythm to pterodactyl, is only revealed to advanced students who have already mastered simpler words. If we attempt to reverse the order of instruction, chaos results: students are forced to learn every single word independently, instead of applying general rules that get them through most of the words and help them develop further rules for the exceptions.

The same happens in math; children are taught to count and add with the help of simplifying assumptions like “triangles are flat” and “base 10.” You don’t teach a toddler to count by beginning with -10 and then explaining that “3” is written as “11” in base two. It doesn’t work.

When you learn physics, you begin with Newtonian dynamics, because these are easy to demonstrate at normal human scales. It is only after mastering the basics of F=ma, objects falling at 9.8 m/s^2, and maybe a bit of calculus that you move on to topics like “What happens when you move close to the speed of light?” or “What happens at the atomic scale?”

Subjects are taught in a particular order that equips students with general rules that work in most situations, then specific rules that cover the most common exceptions. Most people will never need to know the “expert level” versions of most fields. For example, most people do not need to understand why airplanes can fly in order to make a reservation at the airport and go on a trip: it is sufficient to know simply that planes fly.

To argue about whether the “basic” or “expert” version of a field is more correct generally misses the point: each serves a specific purpose. If I am calculating the distance between my house and my friend’s, I do not need to factor in the curvature of the Earth; if I am calculating the distance between my house and the antipodes, I do. If I am balancing my checkbook, I can safely assume that all of the numbers are written in base-10; if I am trying to figure out whether the 16-bit integer limit will make my airplane crash, then it helps to know binary. If I tell my kids to “stay still so I can take your picture,” I don’t want to hear that Brownian motion technically makes it impossible to hold still.
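The 16-bit limit is a concrete case where the “expert” detail matters: a signed 16-bit counter tops out at 32,767, and one more tick silently wraps around to a large negative number. A small sketch of the two’s-complement wraparound (the helper name is mine):

```python
BITS = 16
MASK = (1 << BITS) - 1  # 0xFFFF: keep only the low 16 bits

def wrap_int16(n):
    """Interpret n modulo 2**16 as a signed 16-bit value (two's complement)."""
    n &= MASK
    return n - (1 << BITS) if n >= (1 << (BITS - 1)) else n

print(wrap_int16(32767))      # 32767: the largest signed 16-bit value
print(wrap_int16(32767 + 1))  # -32768: the counter silently wraps around
```

A timer or altitude counter that wraps like this mid-flight is exactly the sort of bug that binary literacy catches and checkbook arithmetic never would.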

The current brouhaha on Twitter over whether “2+2=4” is racist or not is half math geeks happy to finally have an audience for their discussion of obscure math things and half “school reformers” who wouldn’t know ring addition if it hit them in the face but want to claim that it has something to do with early elementary math. (Spoiler: it doesn’t.)

In the case of math, yes, math is a social construct, inasmuch as we could use a different numerical base or different wiggly symbols to represent the numbers on paper. Spelling is also a social construct: there is no particular reason why “C” should be pronounced the way it is, much less why we should have a silent “L” in “could.” (The L in could is actually the result of a centuries-old spelling mistake: “would” and “should” both contain Ls because they are forms of the words “will” and “shall,” which contain Ls. Could is derived from the word “can,” which does not have an L, but because “coud” sounds like “would” and “should,” people just started sticking an erroneous L in there, and we’ve been doing it for so long that it’s stuck.) Money is also socially constructed: there is no particular reason why little green pieces of paper should have any value, and in many cases (lost in the woods, hyperinflation, visiting a foreign country), they don’t. Nevertheless, you need to be able to count, spell, and use money to get along in the society we live in. If math is racist simply because it is socially constructed, then so are all other social constructs. Pennies are racist. Silent “e” is racist.

This absurdity is no accident:

h/t @hollymathnerd, quote by Shraddha Shirude, “ethnomathematics” teacher.

# Race: The Social Construction of Biological Reality

Note: This post contains a lot of oversimplification for the sake of explaining a few things. (Yes, I am still meditating on the greater Asian clade.)

Imagine you’re driving down a long highway that stretches from Nigeria to Beijing, passing through Berlin and New Delhi. In reality this route takes some large twists and turns, but we’re drawing it as a straight line, for all maps must simplify to be useful.

As you drive along, you pass many houses along the way–sometimes just a few clustered next to the highway, sometimes small towns, sometimes megalopolises with billions of people.

Our drive begins in one such megalopolis, that of Sub-Saharan Africa (SSA). Here we meet people like Queen Anna Nzinga, author Chinua Achebe, and–though they have traveled far abroad–African Americans like Oprah Winfrey and Martin Luther King.

Though thousands of different languages are spoken by the thousands of different groups throughout SSA, we may still note a certain physical similarity among them–dark skin perfectly adapted to the equator’s strong sun, dark eyes, and tightly curling hair. While there is a tremendous amount of variety here–probably the most of any megalopolis in the world–they are also, quite clearly, related. You don’t have to go measuring skulls to figure that out.

But as we drive north, the houses thin out. Suddenly we are in a zone with almost no people–an enormous desert: the Sahara.

We speed through this harsh, empty landscape on a starry night, spotting only a few camels in the distance. We’re lucky we have a full tank of gas and several more in the trunk–for all but the most intrepid of our pre-automobile ancestors, this desert was nigh impassable, a vast barrier to human movement.

Finally we reach the vast inland sea of the Mediterranean, and the beginning of our second megalopolis. Most of the people here, from Berbers to Egyptians, have their own distinct look, more similar to their neighbors from the Middle East and Southern Europe than their neighbors to the south, across the inhospitable expanse of sand.

While there are many different countries and languages, no clear phenotypic line separates the people of Northern Africa, the Middle East, southern Europe, or northern Europe. Skin pales, hair lightens and becomes wavy, eyes turn a variety of hues as one nationality melts into the next. North-central Europe is the only place in the world where blue/green/hazel eyes and blond hair are common in adults; even in Wales, dark hair is dominant.

We hang a right through Turkey, Iran, Pakistan, and India, teeming with people. Here again, though the people change and there are barriers like the Thar Desert, we find no harsh, nigh-impenetrable breaks like the Sahara.

Then, suddenly, we run smack into a wall, a natural wall of majestic proportions: the Himalayas. Beyond lie the Tibetan Plateau, Gobi Desert, and the vast emptiness of the Asian steppe. If this land was ever densely populated, generations of marauding steppe warriors have wiped its people out. We see a few people here–aptly named Tibetan lamas, flocks of sheep grazing beside a scattering of yurts. Mongolia holds the distinction of being the world’s least densely populated independent country. (Ice-covered Greenland is even less dense, but owned by Denmark.)

Finally we pass beyond the shadow of the Great Khan’s memorial and into the valley of the Yellow River, where we find our third megalopolis: east Asia.

There is notably less genetic diversity here than in the first megalopolis–indeed, 93% of Han Chinese share a particular variety of the EDAR allele:

> A derived G-allele point mutation (SNP) with pleiotropic effects in EDAR, 370A or rs3827760, found in most modern East Asians and Native Americans but not common in African or European populations, is thought to be one of the key genes responsible for a number of differences between these populations, including the thicker hair, more numerous sweat glands, smaller breasts, and dentition characteristic of East Asians.[7] … The 370A mutation arose in humans approximately 30,000 years ago, and now is found in 93% of Han Chinese and in the majority of people in nearby Asian populations.

Here, too, skin tones vary from north to south, though not as greatly as they do closer to the Greenwich Meridian. Most people have dark eyes, slim frames, and straight, smooth black hair.

Here in the megalopolis made famous by Beijing, Shanghai, Hong Kong, Seoul, and Tokyo, we have come to the end of the first round of our journey. From Africa to Asia, we have found three vast areas filled with people, and two major barriers which–though not completely impassable–have hindered humanity’s footsteps over the millennia.

People sometimes try to claim that human races do not exist simply because edge cases exist: small, scattered groups which possess a mixture of genes common to both Sub-Saharan Africans and Caucasians, or Caucasians and Asians. These groups do in fact exist, and they are fascinating in their own right. But they are also small, often living in extremely harsh, forbidding lands where few humans can survive. (The inhabitants of the Himalayas and Tibetan Plateau, I note, actually carry a gene that helps them survive at high altitudes, which they received via an ancestor’s dalliance with a Denisovan hominin–the Denisovans were cousins to the Neanderthals and lived in Asia long before Homo sapiens. No one else in the world carries this gene, so if you don’t have it, good luck living up there!)

But the vast, vast majority of the world’s people do not live in these harsh and unforgiving lands. They live clustered together in the enormous population centers, continually mixing, migrating, churning, and conquering each other, not people thousands of miles off. The concept of race stems from this basic observation of the geography of human settlements.

Physical distance is genetic distance, but since my diagram is only two-dimensional, it can only show the genetic distance between two points at a time. The genetic distance between Asians and Caucasians is about 40k years–much shorter than the distance between Sub-Saharan Africans and Caucasians, 70k years. But the distance between Sub-Saharan Africans and Asians is also about 70k years. Although Asians and Caucasians split apart from each other about 40,000 years ago, they are both descended from a single group of ancestors (a handful of Denisovans and Neanderthals excluded) who left sub-Saharan Africa about 70k years ago. We may best think of the relationship between these three groups not as a single highway, but as a triangle with two sides 70k long and one side of 40k. But to accurately add more groups to our journey, (as we shall do on Thursday), we would have to keep adding dimensions, and we are aiming here for simplicity, not n-dimensional hypercubes.
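The triangle can be checked numerically. A small sketch (the point labels are mine; the divergence times, in thousands of years, are from the text): on a one-dimensional line, no three points can realize distances of 70, 70, and 40, but in two dimensions the triangle closes exactly.

```python
import math

# On a line: fix SSA at 0 and Caucasian at 70. Asian must sit 40 away from
# Caucasian, i.e. at 30 or 110 -- but neither is 70 away from SSA.
for asian in (70 - 40, 70 + 40):
    assert abs(asian) != 70  # the 1-D "highway" embedding fails

# In two dimensions: SSA at the origin, Caucasian at (70, 0); place Asian at
# distance 70 from SSA and 40 from Caucasian using the law of cosines.
x = (70**2 + 70**2 - 40**2) / (2 * 70)  # projection onto the SSA-Caucasian axis
y = math.sqrt(70**2 - x**2)

print(round(math.dist((x, y), (0, 0))))   # 70: Asian-SSA distance
print(round(math.dist((x, y), (70, 0))))  # 40: Asian-Caucasian distance
```

This is exactly why the two-sides-70, one-side-40 triangle needs a plane rather than a line, and why adding more populations would keep demanding extra dimensions.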

# Rumor, Outrage, and “Fake News”

Back when I started this blog, I had high hopes that the internet would allow people to bring together more and more information, resulting in an explosion of knowledge I referred to as the “Great Informationing.” To some extent, services like Google and Wikipedia have already started this ball rolling by essentially creating searchable databases of crowd-sourced data on a scale and at a speed never known before in human history–indeed, this blog would be much more limited in scope could I not look up at a moment’s notice almost anything I desire to learn.

In the past year, though, I have become disillusioned. While the internet does put a great deal of information at my fingertips, it also puts a great deal of misinformation at my fingertips.

> Rumor flies halfway around the world before Truth has got its pants on. –variously misattributed

It’s bad enough to try to delve into subjects where I don’t speak the correct language to read most of the sources and thus can’t even begin properly searching. It’s even worse if the news I am getting isn’t reliable.

There’s been a lot of talk lately about “fake news.” I’m not sure which sites, exactly, have been promoting “fake news,” but I noticed toward the tail end of the election a seeming proliferation of websites and news sources I’d never heard of before. Clicking on these links generally led me to a site plastered with ads and images (which had a high probability of instantly crashing my computer) and headlines that looked lifted from other sources.

Since noticing this trend, I’ve tried to avoid linking to or trusting any headline that comes from a site I don’t recognize on the grounds that I have no way to confirm whether they are trustworthy, and further, I don’t like having my computer crash. The downside to this policy is that the internet is vast and I certainly do not know every respectable site out there.

I noticed some time ago that even “respectable” papers like the WaPo and NYTimes had quite a lot of one-sided or otherwise questionable reporting. Lies and more lies were another theme I hounded a lot in the early stages of this blog, though my focus then was more on society than the media. Since reading a lot of iSteve, however, I’ve grown more sensitive to the ways media shape narratives, especially via what they choose to report and choose to remain silent on.

When you realize that there are stories the media isn’t commenting on, or is giving you a particular spin on, what do you do?

Look for other sources, I guess.

Last summer I noticed prominent papers printing not just mistakes or one-sided stories, but outright false statements that could only have made it into print because someone purposefully decided to make them up. (For privacy reasons I’m not going into more details, but you can probably supply your own cases.)

There are a variety of things going on with the media, but the internet, sadly, appears to be making matters worse.

It’s no secret that traditional print media has had a rough time since the information superhighway started jazzing up our lives.

I remember when Borders first opened in my neighborhood. I loved that place. I’d bike over there and spend endless hours browsing the shelves, especially during the summer. I found my first anthropology books there.

And I remember when the Borders went out. The empty husk of the building is still there, unoccupied. It’s been empty for years. I wonder what on Earth is wrong with the person who owns that spot. Can’t they find someone to rent it to?

Newspapers have also suffered; with dwindling subscriptions, they’ve simultaneously cut everyone with enough expertise to demand a high salary and turned to generating click-driving content.

When you have subscribers who actually pay for newspapers, they value thoughtful, high-quality reporting. (Otherwise, what are you spending all of that money on?) When readers are just clicking through, outrage drives the news cycle. Articles don’t even have to be about something outrageous–the article itself can be the outrageous thing, so long as people link to it and say, “OMG, can you believe they wrote this?”

Every hate click makes things worse.

The outrage machine is helping drive the SJW-fueled obsession with “identity politics,” particularly feminism, anti-racism, and LGBT issues. This isn’t the first time this style of political correctness has broken out–remember the much-mocked silliness of the late 80s? But back then, only the National Enquirer could hope to use stories about transgender elementary school kids to sell papers. Now everyone can.

It’s bad enough being the kind of person who worries about whether or not the division between “tree” and “bush” is just a social construct, or about the basic unknowability of what one doesn’t know.

But now we have to consider the effects of hate-clicks and outrage on everything we know.

# Femininity as Fashion

My androgyny theory runs up against the obvious complication of how you measure androgyny/dimorphism. Height? Hormones? Behavior? The latter is obviously affected by a ton of environmental factors.

Slate Star Codex has an excellent post analyzing fashion (and politics) via cellular automata. Other people have written really insightful things using this same model, so I recommend shoving it into whatever spare theories you have lying around.

BTW, if you don’t know what I’m talking about, you should read Scott’s post before finishing mine.

Anyway, does the performance of femininity itself follow this model?
I propose yes.

Let’s go back to 1900 or so. Most people are farmers, and farmers have to work damn hard. The wives of farmers are not delicate wilting flowers, but extremely hard workers themselves, with very little excess time or money to spend on things like closets full of shoes. The traits we associate with femininity and gender role performance were largely luxuries available only to the wealthy, a situation that had probably been largely true for centuries.

Then came industrialization, the shift to the cities, and the rapid growth of the middle class. By the 1920s, the middle class could aspire to ape upper class behaviors, spending their new wealth on clothes and shoes and stay-at-home-motherhood. It is probably no coincidence that at the same time, fashionable women began dressing and acting like men, even aspiring to “boyish” figures.

Then came the Depression and WWII, and people went back to eating spare shoes instead of wearing them. By the fifties, femininity was once again a symbol of luxurious good living, complete with the magical wonders of modern technology like vacuums and Jello.

Of course, as soon as the middle class (and even, god forbid, proles,) started aspiring to vacuum in their pearls, such things became horribly retrograde. Poors might aspire to have enough money that one of them might be able to take off a little time to care for their children, but rich people had much better things to do with their time. No self-respecting career woman would be caught dead in public with a parcel of screaming brats; if they must breed for the sake of some horribly chauvinist husband, the actual care and upkeep of the children must be farmed out to suitably low-class (often non-white) nannies. Nor would she deign to humiliate herself by cooking meals or doing laundry. (Such work can also be done by low-class non-white women, to allow rich white women to keep up their masculine lifestyles.)

(Note: it’s not employing people that’s problematic. It’s believing that certain types of work are beneath you, but perfectly acceptable for other sorts of people. If you think women shouldn’t cook and clean, then don’t hire other women to cook and clean.)

Of course, poors and proles never quite got the message and continued buying their daughters Barbies and Bratz and whatnot, despite all of their betters’ constant harangues about the dire moral dangers of such toys.

As the economy continues to suck and the middle class shrinks, will femininity become again the domain of the super-rich?