Book Club: Chinua Achebe’s Things Fall Apart, pt 2

 

Chinua Achebe, author of Things Fall Apart

Welcome back to our discussion of Chinua Achebe’s Things Fall Apart. Today I thought it would be interesting to look at the history of the Igbo (aka Ibo) people, past and present.

The modern Igbo are one of the world’s larger ethnic groups, numbering around 34 million people, most of whom live in southeastern/central Nigeria. About 2 million Igbo live abroad, most of them in Britain (which, as Achebe recounts, colonized Nigeria). The Igbo diaspora is well-known for its intelligence, with Igbo students outscoring even Chinese students in the UK:

Although the Chinese and Indians are still very conspicuously above even the best African nationalities, their superiority disappears when the Nigerian and other groups are broken down even further according to their different tribal ethnicities. Groups like the famous Igbo tribe, which has contributed much genetically to the African American blacks, are well known to be high academic achievers within Nigeria. In fact, their performance seems to be at least as high as the “model minority” Chinese and Indians in the UK, as seen when some recent African immigrants are divided into languages spoken at home (which also indicates that these are not multigenerational descendants but children of recent immigrants).

Africans speaking Luganda and Krio did better than the Chinese students in 2011. The Igbo were even more impressive given their much bigger numbers (and their consistently high performance over the years, gaining a 100 percent pass rate in 2009!). The superior Igbo achievement on GCSEs is not new and has been noted in studies that came before the recent media discovery of African performance. A 2007 report on “case study” model schools in Lambeth also included a rare disclosure of specified Igbo performance (recorded as Ibo in the table below) and it confirms that Igbos have been performing exceptionally well for a long time (5+ A*-C GCSEs); in fact, it is difficult to find a time when they ever performed below British whites.

Of course, Igbo immigrants to the UK are probably smarter than folks who didn’t figure out how to immigrate to the UK, but Peter Frost argues, via a collection of quotes, that even the ones who stayed home are also pretty smart:

All over Nigeria, Ibos filled urban jobs at every level far out of proportion to their numbers, as laborers and domestic servants, as bureaucrats, corporate managers, and technicians. Two-thirds of the senior jobs in the Nigerian Railway Corporation were held by Ibos. Three-quarters of Nigeria’s diplomats came from the Eastern Region. So did almost half of the 4,500 students graduating from Nigerian universities in 1966. The Ibos became known as the “Jews of Africa,” despised—and envied—for their achievements and acquisitiveness. (Baker, 1980)

The Ibos are the wandering Jews of West Africa — gifted, aggressive, Westernized; at best envied and resented, but mostly despised by the mass of their neighbors in the Federation. (Kissinger, 1969)

So what makes the Igbo so smart? Frost attributes their high IQ to the selective effects of an economy based on trade in which the Igbo were middlemen to the other peoples (mostly Yoruba and Fulani) around them, along with an excellent metalworking tradition:

Archaeological sites in the Niger Delta show that advanced economic development began much earlier there than elsewhere in West Africa. This is seen in early use of metallurgy. At one metallurgical complex, dated to 765 BC, iron ore was smelted in furnaces measuring a meter wide. The molten slag was drained through conduits to pits, where it formed blocks weighing up to 43-47 kg. …

This production seems to have been in excess of local needs and therefore driven by trade with other peoples …

This metallurgy is unusual not only in its early date for West Africa but also in its subsequent development, which reached a high level of sophistication despite a lack of borrowing from metallurgical traditions in the Middle East and Europe.

Here is a fun little video on Igbo bronzes (I recommend watching on double speed and pausing occasionally to appreciate the work quality):

So between the bronze, the river, and long-distance trade, the Igbo became the local market-dominant minority–and like most MDMs, with the arrival of democracy came genocide.

Nigeria achieved independence in 1960 and became a Republic in 1963. Periodic military coups and conflict kept disturbing the peace:

From June through October 1966, pogroms in the North killed an estimated 80,000 to 100,000 Igbo, half of them children, and caused more than a million to two million to flee to the Eastern Region.[76] 29 September 1966, was considered the worst day; because of massacres, it was called ‘Black Thursday’.[77][78]

Ethnomusicologist Charles Keil, who was visiting Nigeria in 1966, recounted:

The pogroms I witnessed in Makurdi, Nigeria (late Sept. 1966) were foreshadowed by months of intensive anti-Ibo and anti-Eastern conversations among Tiv, Idoma, Hausa and other Northerners resident in Makurdi, and, fitting a pattern replicated in city after city, the massacres were led by the Nigerian army. Before, during and after the slaughter, Col. Gowon could be heard over the radio issuing ‘guarantees of safety’ to all Easterners, all citizens of Nigeria, but the intent of the soldiers, the only power that counts in Nigeria now or then, was painfully clear. After counting the disemboweled bodies along the Makurdi road I was escorted back to the city by soldiers who apologised for the stench and explained politely that they were doing me and the world a great favor by eliminating Igbos.

… until the Igbos decided they’d had enough and declared themselves an independent country, Biafra, triggering a civil war. The Nigerian and British governments blockaded Biafra, resulting in mass starvation that left nearly 2 million dead.

Why the British government thought starving children was a good use of its money, I don’t know.

(Hint: the answer is oil.)

During the war, Britain covertly supplied Nigeria with weapons and military intelligence and may have also helped it to hire mercenaries.[102] After the decision was made to back Nigeria, the BBC oriented its reporting to favour this side.[103] Supplies provided to the Federal Military Government included two vessels and 60 vehicles.[104]

Go BBC!

(Richard Nixon, always the voice of morality, was against the blockade.)

Chinua Achebe published Things Fall Apart in 1958, two years before independence–so he was awfully prescient.

As news got out about the genocide, people began demanding that food be airlifted into Biafra, but there were some problems getting the food off the ground:

Just “enemy propaganda” of little girls starving to death

Secretary General of the United Nations, U Thant, refused to support the airlift.[5] The position of the Organization of African Unity was to not intervene in conflicts its members deemed internal and to support the nation-state boundaries instituted during the colonial era.[6] The ruling Labour Party of the United Kingdom, which together with the USSR was supplying arms to the Nigerian military,[7] dismissed reports of famine as “enemy propaganda”.[8] Mark Curtis writes that the UK also reportedly provided military assistance on the ‘neutralisation of the rebel airstrips’, with the understanding that their destruction would put them out of use for daylight humanitarian relief flights.[9]

Soon Joint Church Aid, a combination of Protestant, Catholic, Jewish, and other nongovernmental organizations, chartered airplanes and began running the Nigerian blockade (three relief workers were killed when their plane was shot down). The Biafra Airlift is second in scope only to the Berlin Airlift among non-combat airlifts.

Since the end of the war, the state of the Igbo people has steadily improved (the presence of oil in the area is finally benefiting them).

Modern Nigeria has about 200 million people with a fertility rate around 5.5 children per woman (I don’t have data specifically about the Igbo) and a per capita GDP around $2,000, which is high for the area.

It’s getting late, so I’d like to end with some modern Igbo music, a reminder that the people we read about in anthropology books (or literature) never stay in anthropology books:


What’s to be done with the dumb?

Society seems split into two camps on the matter of intelligence. Side A believes that everyone is secretly smart, but for a variety of reasons (bad teachers, TV, racism, sexism, etc.) their true intelligence isn’t showing. Side B believes that some people really are stupid, because they are bad people, and they therefore deserve to suffer.

Out in reality, however, there are plenty of good, decent people who, through no fault of their own, are not smart.

I’m not making my usual jest wherein I claim that about 75% of people are morons. I am speaking of the bottom 40% or so of people who have no particular talents or aptitudes of use in the modern economy. For any job that isn’t pure manual labor, they will almost always be competing with candidates who are smarter, quicker, or better credentialed than they are. Life itself will constantly present them with confusing or impenetrable choices–and it will only get worse as they age.

The agricultural economy–which we lived in until 7 decades ago, more or less–could accommodate plenty of people of modest intellects so long as they were hard-working and honest. A family with a dull son or daughter could, if everyone liked each other, still find a way for them to contribute, and would help keep them warm and comfortable in turn.

When you own your own business, be it a farm or otherwise, you can employ a relative or two. When you are employed by someone else, you don’t have that option. Back in the early 1800s, about 80% of people were essentially self-employed or worked on family farms. Today, about 80% of people are employees, working for someone else.

Agriculture is now largely mechanized, and most of the other low-IQ jobs, whether in stores or factories, are headed the same direction. Self-driving cars may soon replace most of the demand for cabbies and truckers, while check-out kiosks automate retail sales. I wouldn’t be surprised to soon see whole restaurants that are essentially giant vending machines with tables.

The hopeful version of this story says that for every job automated, a new one is created. The invention of the tractor and combine didn’t put people out of work; the freed-up agricultural workers moved to the city and started doing manufacturing jobs. Without automation in the countryside we couldn’t have had so many factories because there would have been no one to work them. Modern automation therefore won’t put people out of jobs, long-term, so much as enable them to work new jobs.

The less hopeful point of view says that we are quickly automating all of the jobs that dumb people can do, and that the new economy requires significantly more intelligence than the old. So, yes, there are new jobs–but dumb people can’t do them.

If the pessimistic view is correct, what options do we have? People are uncomfortable with just letting folks starve to death. We already have Welfare. This seems suboptimal, and people worry that many of those who receive it aren’t virtuously dumb, but crafty and lazy. Makework jobs are another option. If not awful, they can let people feel productive and like they’ve earned their income, but of course they can be awful, and someone else has to make sure the fake job doesn’t result in any real damage. (If they could work unsupervised, they wouldn’t need fake jobs.) Our economy already has a lot of fake jobs, created to make it look like we’re all busy adults doing important things and prevent the poor from burning down civilization.

People have been floating UBI (universal basic income) as another solution. Basically, it offers all of the benefits of welfare without the complicated paperwork or the nagging feeling that some lazy bum is getting a better deal than you, because everyone gets the exact same deal.

UBI would ideally be offset via an increase in sales taxes (since the money is initially likely to go directly to consumption) to avoid hyperinflation. This is where we get into “modern monetary theory,” which basically says (I think) that it doesn’t really matter whether the gov’t taxes and then spends or spends and then taxes, so long as the numbers balance in the end. Of course, this is Yang’s big presidential idea. I think it’s a fascinating idea (I’ve been tossing it around for about fifteen years but haven’t had a whole lot to say about it) and would love to see the independent nation of California or Boston try it out first.
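To make the scale concrete, here is a back-of-the-envelope sketch in Python. Every figure in it is a rough assumption of mine (a $1,000/month benefit, roughly 250 million adults, roughly $14 trillion a year in consumer spending), not anyone’s actual proposal:

```python
# Rough, illustrative arithmetic only -- every figure below is an assumption.
ubi_per_month = 1_000                 # assumed benefit, dollars per adult per month
adults = 250_000_000                  # rough count of US adults
consumption = 14_000_000_000_000      # rough annual consumer spending, dollars

annual_cost = ubi_per_month * 12 * adults
print(f"Annual UBI cost: ${annual_cost / 1e12:.1f} trillion")

# If the offset came entirely from a sales tax on current consumption,
# the required rate would be roughly:
required_rate = annual_cost / consumption
print(f"Required sales tax rate: {required_rate:.0%}")

# In practice the UBI itself gets spent (enlarging the tax base) and some of
# the cost replaces existing welfare spending, so the true rate would differ;
# this only shows the order of magnitude.
```

The point of the toy numbers is just that the sums involved are a large fraction of all consumer spending, which is why the financing mechanism matters so much.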

UBI doesn’t exactly solve the problem of the dumb–who still need help from other people to not get scammed by Nigerian princes–but it could simplify and thus streamline our current system, which is really quite unwieldy.

Thoughts?

Is there any reliable way to distinguish between low IQ and insanity? 

I see claims like this surprisingly often:

Of course there are smart people who are insane, and dumb people who are completely rational. But if we define intelligence as having something to do with accurately understanding and interpreting the information we constantly receive from the world, necessary to make accurate predictions about the future and how one’s interactions with others will go, there’s a clear correlation between accurately understanding the world and being sane.

In other words, a sufficiently dumb person, even a very sane one, will be unable to distinguish between accurate and inaccurate depictions of reality and so can easily espouse beliefs that sound, to others, completely insane.

Is there any way to distinguish between a dumb person who believes wrong things by accident and a smart person who believes wrong things because they are insane?

Digression: I have a friend who was homeless for many years. Eventually he was diagnosed as mentally ill and given a disability check.

“Why?” he asked, but received no answer. He struggled (and failed) for years to prove that he was not disabled.

Eventually he started hearing voices, was diagnosed with schizophrenia, and put on medication. Today he is not homeless, due at least in part to the positive effects of anti-psychotics.

The Last Psychiatrist has an interesting post (deleted from his blog, but re-posted elsewhere) on how SSI is determined:

Say you’re poor and have never worked. You apply for Welfare/cash payments and state Medicaid. You are obligated to try and find work or be enrolled in a jobs program in order to receive these benefits. But who needs that? Have a doctor fill out a form saying you are Temporarily Incapacitated due to Medical Illness. Yes, just like 3rd grade. The doc will note the diagnosis, however, it doesn’t matter what your diagnosis is, it only matters that a doctor says you are Temporarily Incapacitated. So cancer and depression both get you the same benefits.

Nor does it matter if he medicates you, or even believes you, so long as he signs the form and writes “depression.”(1) The doc can give you as much time off as he wants (6 months is typical) and you can return, repeatedly, to get another filled out. You can be on state medicaid and receive cash payments for up to 5 years. So as long as you show up to your psych appointments, you can receive benefits with no work obligation.

“That’s not how it works for me”

you might say, which brings us to the whole point: it’s not for you. It is for the entire class of people we label as poor, about whom comic Greg Giraldo joked: “it’s easy to forget there’s so much poverty in the United States, because the poor people look just like black people.” Include inner city whites and hispanics, and this is how the government fights the War On Poverty.

In the inner cities, the system is completely automated. Poor person rolls in to the clinic, fills out the paperwork (doc signs a stack of them at the end of the day), he sees a therapist, a doctor, +/- medications, and gets his benefits.

There’s no accountability, at all. I have never once been asked by the government whether the person deserved the money, the basis for my diagnosis– they don’t audit the charts, all that exists is my sig on a two page form. The system just is.

see if you can find the one poor person hidden in this picture (Last Psychiatrist)

Enter SSI, Supplemental Security Income. You can earn lifetime SSI benefits (about $600/mo + medical insurance) if “you” can “show” you are “Permanently Disabled” due to a “medical illness.”
“You” = your doc who fills out a packet with specific questions; and maybe a lawyer who processes the massive amounts of other paperwork, and argues your case, and charges about 20% of a year’s award.

“Show” has a very specific legal definition: whatever the judge feels like that day. I have been involved in thousands of these SSI cases, and to describe the system as arbitrary is to describe Blake Lively as “ordinary.”

“Permanently disabled” means the illness prevents you from ever working. “But what happens when you get cured?” What is this, the future? You can’t cure bipolar.

“Medical illness” means anything. The diagnosis doesn’t matter, only that “you” show how the diagnosis makes it impossible for you to work. Some diagnoses are easier than others, but none are impossible. “Unable to work” has specific meaning, and specific questions are asked: ability to concentrate, ability to complete a workweek, work around others, take criticism from supervisors, remember and execute simple/moderately difficult/complex requests and tasks, etc.

Fortunately, your chances of being awarded SSI are 100%…

It’s a good post. You should read the whole thing.

TLP’s point is not that the poor are uniformly mentally ill, but that our country is using the disability system as a means of routing money to poor people in order to pacify them (and maybe make their lives better).

I’ve been playing a bit of sleight of hand here, swapping between “poor” and “dumb.” But they are categories that highly overlap, given that dumb people have trouble getting jobs that pay well. Despite TLP’s point, many of the extremely poor are, by the standards of the middle class and above, mentally disabled. We know because they can’t keep a job and pay their bills on time.

“Disabled” is a harsh word to some ears. Who’s to say they aren’t equally able, just in different ways?

Living under a bridge isn’t being differently-abled. It just sucks.

Normativity bias happens when you assume that everyone else is just like you. Middle and upper-middle class people tend to assume that everyone else thinks like they do, and the exceptions, like guys who think the CIA is trying to communicate with them via the fillings in their teeth, are few and far between.

As for the vast legions of America’s unfortunates, they assume that these folks are basically just like themselves. If they aren’t very bright, this only means they do their mental calculations a little slower–nothing a little hard work, grit, mindfulness, and dedication can’t make up for. The fact that anyone remains poor, then, has to be the fault of either personal failure (immorality) or outside forces like racism keeping people down.

These same people often express the notion that academia or Mensa are crawling with high-IQ weirdos who can barely tie their shoes and are incapable of socializing with normal humans, to which I always respond that furries exist. 

These people need to get out more if they think a guy successfully holding down a job that took 25 years of work in the same field to obtain and that requires daily interaction with peers and students is a “weirdo.” Maybe he wears more interesting t-shirts than a middle manager at BigCorp, but you should see what the Black Hebrew Israelites wear.

I strongly suspect that what we would essentially call “mental illness” among the middle and upper classes is far more common than people realize among the lower classes.

As I’ve mentioned before, there are multiple kinds of intellectual retardation. Some people suffer physical injuries (like shaken baby syndrome or encephalitis), some have genetic defects like Down’s Syndrome, and some are simply dull people born to dull parents. Intelligence is part genetic, so just as some people are gifted with lucky smart genes, some people are visited by the stupid fairy, who only leaves dumb ones. Life isn’t fair.

Different kinds of retardation manifest differently, with different levels of overall impairment in life skills. There are whole communities where the average person tests as mentally retarded, yet people in these communities go on providing for themselves, building homes, raising their children, etc. They do not do so in the same ways as we would–and there is an eternal chicken-and-egg debate about whether the environment they are raised in causes their scores, or their scores cause their environment–but nevertheless, they do.

All of us humans are descended from people who were significantly less intelligent than ourselves. Australopithecines were little smarter than chimps, after all. The smartest adult pygmy chimps (bonobos), like Kanzi, only know about 3,000 words, which is about the same as a 3- or 4-year-old human. (We marvel that chimps can do things a kindergartener finds trivial, like turn on the TV.) Over the past few million years, our ancestors got a lot smarter.

How do chimps think about the world? We have no particular reason to assume that they think about it in ways that substantially resemble our own. While they can make tools and immediately use them, they cannot plan for tomorrow (dolphins probably beat them at planning). They do not make sentences of more than a few words, much less express complex ideas.

Different humans (and groups of humans) also think about the world in very different ways from each other–which is horrifyingly obvious if you’ve spent any time talking to criminals. (The same people who think nerds are weird and bad at socializing ignore the existence of criminals, despite strategically moving to neighborhoods with fewer of them.)

Even non-criminal communities have all sorts of strange practices, including cannibalism, human sacrifice, wife burning, genital mutilation, coprophagy, etc. Anthropologists (and economists) have devoted a lot of effort to trying to understand and explain these practices as logical within their particular contexts–but a different explanation is possible: that different people sometimes think in very different ways.

For example, some people think there used to be Twa Pygmies in Ireland, before that nefarious St. Patrick got there and drove out all of the snakes. (Note: Ireland didn’t have snakes when Patrick arrived.)

(My apologies for this being a bit of a ramble, but I’m hoping for feedback from other people on what they’ve observed.)

Book Club: The 10,000 Year Explosion pt 7: Finale

 

Niels Bohr: 50% Jewish, 100% Quantum Physics

Chapter 7 of The 10,000 Year Explosion is about the evolution of high Ashkenazi IQ; chapter 8 is the Conclusion, which is just a quick summary of the whole book. (If you’re wondering if you would enjoy the book, try reading the conclusion and see if you want to know more.)

This has been an enjoyable book. As works on human evolution go, it’s light–not too long and no complicated math. Pinker’s The Blank Slate gets into much more philosophy and ethics. But The 10,000 Year Explosion covers a lot of interesting ground, especially if you’re new to the subject.

I have seen at least 2 people mention recently that they had plans to respond to/address Cochran and Harpending’s timeline of Jewish history/evolution in chapter 7. I don’t know enough to question the story, so I hope you’ll jump in with anything enlightening.

The basic thesis of Chapter 7 is that the massive Ashkenazi over-representation in science, math, billionaire lists, and ideas generally is due to their massive brains, which is due in turn to selective pressure over the past thousand years or so in Germany and nearby countries to be good at jobs that require intellect. The authors quote the historian B. D. Weinryb:

More children survived to adulthood in affluent families than in less affluent ones. A number of genealogies of business leaders, prominent rabbis, community leaders, and the like–generally belonging to the more affluent classes–show that such people often had four, six, sometimes even eight or nine children who reached adulthood…

Einstein and Bohr, 1925

Weinryb cites a census of the town of Brody, 1764: homeowner households had 1.2 children per adult; tenant households had only 0.6.

As evidence for this recent evolution, the authors point to the many genetic diseases that disproportionately affect Ashkenazim:

Tay-Sachs disease, Gaucher’s disease, familial dysautonomia, and two different forms of hereditary breast cancer (BRCA1 and BRCA2), and these diseases are up to 100 times more common in Ashkenazi Jews than in other European populations. …

In principle, absent some special cause, genetic diseases like these should be rare. New mutations, some of which have bad effects, appear in every generation, but those that cause death or reduced fertility should be disappearing with every generation. … one in every twenty-five Ashkenazi Jews carries a copy of the Tay-Sachs mutation, which kills homozygotes in early childhood. This is an alarming rate.

What’s so special about these diseases, and why do the Ashkenazim have so darn many of them?

Some of them look like IQ boosters, considering their effects on the development of the central nervous system. The sphingolipid mutations, in particular, have effects that could plausibly boost intelligence. In each, there is a buildup of some particular sphingolipid, a class of modified fat molecules that play a role in signal transmission and are especially common in neural tissues. Researchers have determined that elevated levels of those sphingolipids cause the growth of more connections among neurons…

There is a similar effect in Tay-Sachs disease: increased levels of a characteristic storage compound… which causes a marked increase in the growth of dendrites, the fine branches that connect neurons. …

We looked at the occupations of patients in Israel with Gaucher’s disease… These patients are much more likely to be engineers or scientists than the average Israeli Ashkenazi Jew–about eleven times more likely, in fact.

Einstein and Oppenheimer, Father of the Atomic Bomb, c. 1950

Basically, the idea is that, as with sickle cell anemia, being heterozygous for one of these traits may make you smarter–and being homozygous might make your life considerably shorter. In an environment where being a heterozygous carrier is rewarded strongly enough, the diseases will propagate–even if they incur a significant cost.
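As a back-of-the-envelope check of my own (not a calculation from the book), the textbook heterozygote-advantage model shows how small an edge would suffice. If non-carriers, carriers, and affected homozygotes have relative fitnesses 1−s, 1, and 0, the mutation settles at an equilibrium allele frequency of q = s/(1+s). Plugging in the quoted 1-in-25 carrier rate:

```python
# Textbook heterozygote-advantage (overdominance) equilibrium, as a sanity check.
# Fitnesses: non-carrier 1-s, carrier 1, affected homozygote 0 (e.g., Tay-Sachs).
# At equilibrium the mutant allele frequency is q = s / (1 + s).

carrier_freq = 1 / 25                       # quoted Ashkenazi Tay-Sachs carrier rate
# Solve 2q(1-q) = carrier_freq for the allele frequency q.
q = (1 - (1 - 2 * carrier_freq) ** 0.5) / 2
s = q / (1 - q)                             # carrier advantage needed to maintain q

print(f"allele frequency q ≈ {q:.3f}")                # ≈ 0.020
print(f"required carrier fitness edge s ≈ {s:.1%}")   # about 2%
```

So a carrier advantage on the order of two percent is, in principle, enough to hold even a childhood-lethal recessive at the observed frequency–which is the shape of the heterozygote-advantage argument.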

It’s a persuasive argument.

I’d like to go on a quick tangent to Von Neumann’s Wikipedia page:

Von Neumann was a child prodigy. When he was 6 years old, he could divide two 8-digit numbers in his head [14][15] and could converse in Ancient Greek. When the 6-year-old von Neumann caught his mother staring aimlessly, he asked her, “What are you calculating?”[16]

Children did not begin formal schooling in Hungary until they were ten years of age; governesses taught von Neumann, his brothers and his cousins. Max believed that knowledge of languages in addition to Hungarian was essential, so the children were tutored in English, French, German and Italian.[17] By the age of 8, von Neumann was familiar with differential and integral calculus,[18] but he was particularly interested in history. He read his way through Wilhelm Oncken’s 46-volume Allgemeine Geschichte in Einzeldarstellungen.[19] A copy was contained in a private library Max purchased. One of the rooms in the apartment was converted into a library and reading room, with bookshelves from ceiling to floor.[20]

Von Neumann

Von Neumann entered the Lutheran Fasori Evangélikus Gimnázium in 1911. Wigner was a year ahead of von Neumann at the Lutheran School and soon became his friend.[21] This was one of the best schools in Budapest and was part of a brilliant education system designed for the elite. Under the Hungarian system, children received all their education at the one gymnasium. Despite being run by the Lutheran Church, the school was predominately Jewish in its student body.[22] The school system produced a generation noted for intellectual achievement, which included Theodore von Kármán (b. 1881), George de Hevesy (b. 1885), Leó Szilárd (b. 1898), Dennis Gabor (b. 1900), Eugene Wigner (b. 1902), Edward Teller (b. 1908), and Paul Erdős (b. 1913).[23] Collectively, they were sometimes known as “The Martians”.[24]

One final thing in The 10,000 Year Explosion jumped out at me:

There are also reports of individuals with higher-than-average intelligence who have nonclassic congenital adrenal hyperplasia (CAH)… CAH, which causes increased exposure of the developing fetus to androgens (male sex hormones), is relatively mild compared to diseases like Tay-Sachs. At least seven studies show high IQ in CAH patients, parents, and siblings, ranging from 107 to 113. The gene frequency of CAH among the Ashkenazim is almost 20 percent.

Holy HBD, Batman, that’ll give you a feminist movement.

If you haven’t been keeping obsessive track of who’s who in the feminist movement, many of the early pioneers were Jewish women, as discussed in a recent article by the Jewish Telegraphic Agency, “A History of the Radical Jewish Feminists and the one Subject they Never Talked About”:

Heather Booth, Amy Kesselman, Vivian Rothstein and Naomi Weisstein. The names of these bold and influential radical feminists may have faded in recent years, but they remain icons to students of the women’s liberation movement …

The Gang of Four, as they dubbed themselves, were among the founders of Chicago’s Women’s Liberation Union. …

Over weeks, months and years, no subject went unturned, from the political to the sexual to the personal. They were “ready to turn the world upside down,” recalled Weisstein, an influential psychologist, neuroscientist and academic who died in 2015.

But one subject never came up: the Jewish backgrounds of the majority of the group.

“We never talked about it,” Weisstein said.

Betty Friedan was Jewish; Gloria Steinem is half Jewish. There are a lot of Jewish feminists.

Of course, Jews are over-represented in pretty much every intellectual circle. Ayn Rand, Karl Marx, and Noam Chomsky are all Jewish. Einstein and Freud were Jewish. I haven’t seen anything suggesting that Jews are more over-represented in feminism than in any other intellectual movement. Perhaps they just like ideas. Someone should come up with some numbers.

Here’s a page on Congenital Adrenal Hyperplasia. The “classic” variety is often deadly, but the non-classic (the sort we are discussing here) doesn’t kill you.

Paul Erdős with Terence Tao, 1984 (Tao isn’t Jewish, of course.)

I’ve long suspected that I know so many trans people because some intersex conditions result in smarter brains (in this case, women who are better than average at math). It looks like I may be on the right track.

Well, that’s the end of the book. I hope you enjoyed it. What did you think? And what should we read next? (I’m thinking of doing Pinker’s Blank Slate.)

The Female Problem

 

Lise Meitner and Otto Hahn in their laboratory, 1912

As Pumpkin Person reports, 96% of people with math IQs over 154 are male (at least in the early 1980s). Quoting from Feingold, A. (1988). Cognitive gender differences are disappearing. American Psychologist, 43(2), 95-103:

When the examinees from the two test administrations were combined, 96% of 99 scores of 800 (the highest possible scaled score), 90% of 433 scores in the 780-790 range, 81% of 1479 scores between 750 and 770, and 56% of 3,768 scores of 600 were earned by boys.

The linked article notes that this was an improvement over the previous gender gap in high-end math scores. (This improvement may itself be an illusion, due to the immigration of smarter Asians rather than any narrowing of the gap among locals.)

I don’t know what the slant is among folks with 800s on the verbal sub-test, though it is probably less–far more published authors and journalists are male than top mathematicians are female. (Language is a much older human skill than math, and we seem to have a correspondingly easier time with it.) ETA: I found some data. Verbal is split nearly 50/50 across the board; the short-lived essay had a female bias. Since the 90s, the male:female ratio for math scores over 700 has improved from 13:1 to 4:1; there’s more randomness in the data for 800s, but the ratio is consistently more male-dominated.

High SAT (or any other sort of) scores are isolating. A person with a combined score between 950 and 1150 (on recent tests) falls comfortably into the middle of the range; most people have scores near them. A person with a score above 1350 is in the 90th percentile–that is, 90% of people have scores lower than theirs.

People with scores that round up to 1600 are above the 99th percentile. Over 99% of people have lower scores than they do.

And if on top of that you are a female with a math score above 750, you’re now a minority within a minority–75% or more of the tiny sliver of people at your level are likely to be male.

Obviously the exact details change over time–the SAT is periodically re-normed and revised–and of course no one makes friends by pulling out their SAT scores and nixing anyone with worse results.

But the general point holds true, regardless of our adjustments, because people bond with folks who think similarly to themselves, have similar interests, or are classmates/coworkers–and if you are a female with high math abilities, you know well that your environment is heavily male.

This is not so bad if you are at a point in your life when you are looking for someone to date and want to be around lots of men (in fact, it can be quite pleasant). It becomes a problem when you are past that point, and looking for fellow women to converse with. Married women with children, for example, do not typically associate in groups that are 90% male–nor should they, for good reasons I can explain in depth if you want me to.

A few months ago, a young woman named Kathleen Rebecca Forth committed suicide. I didn’t know Forth, but she was a nerd, and nerds are my tribe.

She was an effective altruist who specialized in understanding people through the application of rationality techniques. She was in the process of becoming a data scientist so that she could earn the money she needed to dedicate her life to charity.

I cannot judge the objective truth of Forth’s suicide letter, because I didn’t know her or any of the people in her particular communities. I have very little experience with life as a single person, having had the good luck to marry young. Nevertheless, Forth is dead.

At the risk of oversimplifying the complex motivations for Forth’s death, she was desperately alone and felt like she had no one to protect her. She wanted friends, but was instead surrounded by men who wanted to mate with her (with or without her consent). Normal people can solve this problem by simply hanging out with more women. This is much harder for nerds:

Rationality and effective altruism are the loves of my life. They are who I am.

I also love programming. Programming is part of who I am.

I could leave rationality, effective altruism and programming to escape the male-dominated environments that increase my sexual violence risk so much. The trouble is, I wouldn’t be myself. I would have to act like someone else all day.

Imagine leaving everything you’re interested in, and all the social groups where people have something in common with you. You’d be socially isolated. You’d be constantly pretending to enjoy work you don’t like, to enjoy activities you’re not interested in, to bond with people who don’t understand you, trying to be close to people you don’t relate to… What kind of life is that? …

Before I found this place, my life was utterly unengaging. No one was interested in talking about the same things. I was actually trying to talk about rationality and effective altruism for years before I found this place, and was referred into it because of that!

My life was tedious and very lonely. I never want to go back to that again. Being outside this network felt like being dead inside my own skin.

Why Forth could not effectively change the way she interacted with men in order to decrease the sexual interest she received from them, I do not know–it is perhaps unknowable–but I think her life would not have ended had she been married.

A couple of years ago, I met someone who initiated a form of attraction I’d never experienced before. I was upset because of a sex offender and wanted to be protected. For months, I desperately wanted this person to protect me. My mind screamed for it every day. My survival instincts told me I needed to be in their territory. This went on for months. I fantasized about throwing myself at them, and even obeying them, because they protected me in the fantasy.

That is very strange for me because I had never felt that way about anyone. Obedience? How? That seemed so senseless.

Look, no one is smart in all ways at once. We all have our blind spots. Forth’s blind spot was this thing called “marriage.” It is perhaps also a blind spot for most of the people around her–especially this one. She should not be condemned for not being perfect, any more than the rest of us.

But we can still conclude that she was desperately lonely for normal things that normal people seek–friendship, love, marriage–and her difficulties stemmed in part from the fact that her environment was 90% male. She had no group of like-minded females to bond with and seek advice and feedback from.

Forth’s death prompted me to create The Female Side, an open thread for any female readers of this blog, along with a Slack-based discussion group. (The invite is in the comments over on the Female Side.) You don’t have to be alone. (You don’t even have to be good at math.) We are rare, but we are out here.

(Note: anyone can feel free to treat any thread as an Open Thread, and some folks prefer to post over on the About page.)

Given all of this, why don’t I embrace efforts to get more women into STEM? Why do I find these efforts repulsive, and accept the heavily male-dominated landscape? Wouldn’t it be in my self-interest to attract more women to STEM and convince people, generally, that women are talented at such endeavors?

I would love it if more women were genuinely interested in STEM. I am also grateful to pioneers like Marie Curie and Lise Meitner, whose brilliance and dedication forced open the doors of academies that had formerly been entirely closed to women.

The difficulty is that genuine interest in STEM is rare, and even rarer in women. The over-representation of men at both the high and low ends of mathematical abilities is most likely due to biological causes that even a perfect society that removes all gender-based discrimination and biases cannot eliminate.
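To illustrate why the tails behave this way, here is a toy calculation with entirely made-up numbers (not real test statistics): give men and women identical average ability but make the male distribution slightly more spread out, and the far tails become lopsidedly male even though the averages are the same.

```python
from math import erfc, sqrt

def upper_tail(cutoff, mean=0.0, sd=1.0):
    """P(X > cutoff) for a normal distribution with the given mean and sd."""
    return 0.5 * erfc((cutoff - mean) / (sd * sqrt(2)))

# Toy assumption: identical means, male standard deviation 10% larger.
male_sd, female_sd = 1.10, 1.00

for cutoff in (2, 3, 4):  # thresholds, in female standard deviations
    m = upper_tail(cutoff, sd=male_sd)
    f = upper_tail(cutoff, sd=female_sd)
    share_male = m / (m + f)  # assuming equal numbers of men and women
    print(f"cutoff {cutoff} SD: {share_male:.0%} male above it (ratio {m / f:.1f}:1)")
```

With these made-up parameters the male share climbs from roughly 60% at two standard deviations to roughly 80% at four. The exact figures don’t matter; the point is that small distributional differences get amplified enormously at the extremes–and mirrored at the bottom end as well.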

It does not benefit me one bit if STEM gets flooded with women who are not nerds. That is just normies invading and taking over my territory. It’s middle school all over again.

If your idea of “getting girls interested in STEM” includes makeup kits and spa masks, I posit that you have no idea what you’re talking about, you’re appropriating my culture, and you can fuck off.

Please take a moment to appreciate just how terrible this “Project Mc2” “Lip Balm Lab” is. I am not sure I have words sufficient to describe how much I hate this thing and its entire line, but let me try to summarize:

There’s nothing inherently wrong with lip balm. The invention of makeup that isn’t full of lead and toxic chemicals was a real boon to women. There are, in fact, scientists at work at makeup companies, devoted to inventing new shades of eye shadow, quicker-drying nail polish, less toxic lipstick, etc.

And… wearing makeup is incredibly normative for women. Little girls play at wearing makeup. Obtaining your first adult makeup and learning how to apply it is practically a rite of passage for young teens. Most adult women love makeup and wear it every day.

Except:

Nerd women.

Female nerds just aren’t into makeup.

Marie Curie, fashionista

I’m not saying they never wear makeup–there’s even a significant subculture of people who enjoy cosplay/historical re-enactment and construct elaborate costumes, including makeup–but most of us don’t. Much like male nerds, we prioritize comfort and functionality in the things covering our bodies, not fashion trends.

And if anything, makeup is one of the most obvious shibboleths that distinguishes between nerd females and normies.

In other words, they took the tribal marker of the people who made fun of us throughout elementary and high school and repackaged it as “Science!” in an effort to get more normies into STEM, and I’m supposed to be happy about this?!

I am not ashamed of the fact that women are rarer than men at the highest levels of math abilities. Women are also rarer than men at the lowest levels of math abilities. I feel no need to cram people into disciplines they aren’t actually interested in just so we can have equal numbers of people in each–we don’t need equal numbers of men and women in construction work, plumbing, electrical engineering, long-haul trucking, nursing, teaching, childcare, etc.

It’s okay for men and women to enjoy different things–on average–and it’s also okay for some people to have unusual talents or interests.

It’s okay to be you.

(I mean, unless you’re a murderer or something. Then don’t be you.)

Neuropolitics: “Openness” and Cortical Thickness

Brain anatomy: gyri

I ran across an interesting study today, on openness, creativity, and cortical thickness.

The psychological trait of “openness”–that is, willingness to try new things or experiences–correlates with other traits like creativity and political liberalism. (This might be changing as cultural shifts are changing what people mean by “liberalism,” but it was true a decade ago and is still statistically true today.)

Researchers took a set of 185 intelligent people studying or employed in STEM, gave them personality tests intended to measure “openness,” and then scanned their brains to measure cortical thickness in various areas.

According to Citizendium, “Cortical thickness” is:

a brain morphometric measure used to describe the combined thickness of the layers of the cerebral cortex in mammalian brains, either in local terms or as a global average for the entire brain. Given that cortical thickness roughly correlates with the number of neurons within an ontogenetic column, it is often taken as indicative of the cognitive abilities of an individual, albeit the latter are known to have multiple determinants.

According to the article in PsyPost, reporting on the study:

“The key finding from our study was that there was a negative correlation between Openness and cortical thickness in regions of the brain that underlie memory and cognitive control. This is an interesting finding because typically reduced cortical thickness is associated with decreased cognitive function, including lower psychometric measures of intelligence,” Vartanian told PsyPost.

Citizendium explains some of the issues associated with a cortex that is too thin or too thick:

Typical values in adult humans are between 1.5 and 3 mm, and during aging, a decrease (also known as cortical thinning) on the order of about 10 μm per year can be observed [3]. Deviations from these patterns can be used as diagnostic indicators for brain disorders: While Alzheimer’s disease, even very early on, is characterized by pronounced cortical thinning[4], Williams syndrome patients exhibit an increase in cortical thickness of about 5-10% in some regions [5], and lissencephalic patients show drastic thickening, up to several centimetres in occipital regions[6].

Obviously people with Alzheimer’s have difficulty remembering things, but people with Williams Syndrome also tend to be low-IQ and have difficulty with memory.

Of course, the cortex is a big region, and it may matter specifically where yours is thin or thick. In this study, the thinness was found in the left middle frontal gyrus, left middle temporal gyrus, left superior temporal gyrus, left inferior parietal lobule, right inferior parietal lobule, and right middle temporal gyrus.

These are areas that, according to the study’s authors, have previously been shown to be activated during neuroimaging studies of creativity, and so are precisely the places where you would expect to see some kind of anatomical difference in particularly creative people.

Hypothetically, maybe reduced cortical thickness, in some people, makes them worse at remembering specific kinds of experiences–and thus more likely to try new ones. For example, if I remember very strongly that I like Tomato Sauce A, and that I hate Tomato Sauce B, I’m likely to just keep buying A. But if every time I go to the store I only have a vague memory that there was a tomato sauce I really liked, I might just pick sauces at random–eventually trying all of them.

The authors have a different interpretation:

“We believe that the reason why Openness is associated with reduced cortical thickness is that this condition reduces the person’s ability to filter the contents of thought, thereby facilitating greater immersion in the sensory, cognitive, and emotional information that might otherwise have been filtered out of consciousness.”

So, less meta-brain, more direct experience? Less worrying, more experiencing?

The authors note a few problems with the study (for starters, it is hardly a representative sample of either “creative” people or exceptional geniuses, being limited to people in STEM), but it is still an interesting piece of data and I hope to see more like it.

 

If you want to read more about brains, I recommend Kurzweil’s How to Create a Mind, which I am reading now. It goes into some detail on relevant brain structures, and how they work to create memories, recognize patterns, and let us create thought. (Incidentally, the link goes to Amazon Smile, which raises money for charity; I selected St. Jude’s.)

Maybe America is too Dumb for Democracy: A Review of Nichols’s The Death of Expertise

For today’s Cathedral Round-Up, I finally kept my commitment to review Tom Nichols’s The Death of Expertise: The Campaign against Established Knowledge and Why it Matters. It was better than I expected (though that isn’t saying much).

Make no mistake: Nichols is annoyingly arrogant. He draws a rather stark line between “experts” (who know things) and everyone else (who should humbly limit themselves to voting between options defined for them by the experts). He implores people to better educate themselves in order to be better voters, but has little patience for autodidacts and bloggers like myself who are actually trying.

But arrogance alone doesn’t make someone wrong.

Nichols’s first thesis is simple: most people are too stupid or ignorant to second-guess experts or even contribute meaningfully to modern policy discussions. How can people who can’t find Ukraine on a map or think we should bomb the fictional city of Agrabah contribute in any meaningful way to a discussion of international policy?

It was one thing, in 1776, to think the average American could vote meaningfully on the issues of the day–a right they took by force, by shooting anyone who told them they couldn’t. Life was less complicated in 1776, and the average person could master most of the skills they needed to survive (indeed, pioneers on the edge of the frontier had to be mostly self-sufficient in order to survive). Life was hard–most people engaged in long hours of heavy labor plowing fields, chopping wood, harvesting crops, and hauling necessities–but it could be mastered by people who hadn’t graduated from elementary school.

But the modern industrial (or post-industrial) world is much more complicated than the one our ancestors grew up in. Today we have cars (maybe even self-driving cars), electrical grids and sewer systems, atomic bombs and fast food. The speed of communication and transportation has made it possible to chat with people on the other side of the earth and show up on their doorstep a day later. The amount of specialized, technical knowledge necessary to keep modern society running would astonish the average caveman–even with 15+ years of schooling, the average person can no longer build a house, nor even produce basic necessities like clothes or food. Most of us can’t even make a pencil.

Even experts who are actually knowledgeable about their particular area may be completely ignorant of fields outside of their expertise. Nichols speaks Russian, which makes him an expert in certain Russian-related matters, but he probably knows nothing about optimal high-speed rail networks. And herein lies the problem:

The American attachment to intellectual self-reliance described by Tocqueville survived for nearly a century before falling under a series of assaults from both within and without. Technology, universal secondary education, the proliferation of specialized expertise, and the emergence of the United States as a global power in the mid-twentieth century all undermined the idea… that the average American was adequately equipped either for the challenges of daily life or for running the affairs of a large country.

… the political scientist Richard Hofstadter wrote that “the complexity of modern life has steadily whittled away the functions the ordinary citizen can intelligently and competently perform for himself.”

… Somin wrote in 2015 that the “size and complexity of government” have made it “more difficult for voters with limited knowledge to monitor and evaluate the government’s many activities. The result is a polity in which the people often cannot exercise their sovereignty responsibly and effectively.”

In other words, society is now too complex and people too stupid for democracy.

Nichols’s second thesis is that people used to trust experts, which let democracy function, but today they are less trusting. He offers no evidence other than his general conviction that this change has happened.

He does, however, detail the ways he thinks that 1. people have been given inflated egos about their own intelligence, and 2. our information-delivery system has degenerated into misinformational goo, resulting in the trust problems he believes we are having. These are interesting arguments and worth examining.

A bit of summary:

Indeed, maybe the death of expertise is a sign of progress. Educated professionals, after all, no longer have a stranglehold on knowledge. The secrets of life are no longer hidden in giant marble mausoleums… in the past, there was less stress between experts and laypeople, but only because citizens were simply unable to challenge experts in any substantive way. …

Participation in political, intellectual, and scientific life until the early twentieth century was far more circumscribed, with debates about science, philosophy, and public policy all conducted by a small circle of educated males with pen and ink. Those were not exactly the Good Old Days, and they weren’t that long ago. The time when most people didn’t finish high school, when very few went to college, and only a tiny fraction of the population entered professions is still within living memory of many Americans.

Aside from Nichols’s insistence that he believes in modern American notions about gender and racial equality, I get the impression that he wouldn’t mind the Good Old Days of genteel pen-and-ink discussions between intellectuals. However, I question his claim that participation in political life was far more circumscribed–after all, people voted, and politicians liked getting people to vote for them. People everywhere, even illiterate peasants on the frontier or up in the mountains, like to gather and debate about God, politics, and the meaning of life. The question is less “Did they discuss it?” and more “Did their discussions have any effect on politics?” Certainly we can point to abolition, women’s suffrage, prohibition, and the Revolution itself as heavily grass-roots movements.

But continuing with Nichols’s argument:

Social changes only in the past half century finally broke down old barriers of race, class, and sex not only between Americans in general but also between uneducated citizens and elite experts in particular. A wide circle of debate meant more knowledge but more social friction. Universal education, the greater empowerment of women and minorities, the growth of a middle class, and increased social mobility all threw a minority of experts and the majority of citizens into direct contact, after nearly two centuries in which they rarely had to interact with each other.

And yet the result has not been a greater respect for knowledge, but the growth of an irrational conviction among Americans that everyone is as smart as everyone else.

Nichols is distracting himself with the reflexive racial argument; the important change he is highlighting isn’t social but technical.

I’d like to quote a short exchange from Our Southern Highlanders, an anthropology-style text written about Appalachia about a century ago:

The mountain clergy, as a general rule, are hostile to “book larnin’,” for “there ain’t no Holy Ghost in it.” One of them who had spent three months at a theological school told President Frost, “Yes, the seminary is a good place ter go and git rested up, but ’tain’t worth while fer me ter go thar no more ’s long as I’ve got good wind.”

It used to amuse me to explain how I knew that the earth was a sphere; but one day, when I was busy, a tiresome old preacher put the everlasting question to me: “Do you believe the earth is round?” An impish perversity seized me and I answered, “No—all blamed humbug!” “Amen!” cried my delighted catechist, “I knowed in reason you had more sense.”

But back to Nichols, who really likes the concept of expertise:

One reason claims of expertise grate on people in a democracy is that specialization is necessarily exclusive. When we study a certain area of knowledge or spend our lives in a particular occupation, we not only forego expertise in other jobs or subjects, but also trust that other people in the community know what they’re doing in their area as surely as we do in our own. As much as we might want to go up to the cockpit after the engine flames out to give the pilots some helpful tips, we assume–in part, because we have to–that they’re better able to cope with the problem than we are. Otherwise, our highly evolved society breaks down into islands of incoherence, where we spend our time in poorly informed second-guessing instead of trusting each other.

This would be a good point to look at data on overall trust levels, friendship, civic engagement, etc. (it’s down; it’s all down) and maybe some explanations for these changes.

Nichols talks briefly about the accreditation and verification process for producing “experts,” which he rather likes. There is an interesting discussion in the economics literature on things like the economics of trust and information (how do websites signal that they are trustworthy enough that you will give them your credit card number and expect to receive items you ordered a few days later?) which could apply here, too.

Nichols then explores a variety of cognitive biases, such as superstitions, phobias, and conspiracy theories:

Conspiracy theories are also a way for people to give meaning to events that frighten them. Without a coherent explanation for why terrible things happen to innocent people, they would have to accept such occurrences as nothing more than the random cruelty either of an uncaring universe or an incomprehensible deity. …

The only way out of this dilemma is to imagine a world in which our troubles are the fault of powerful people who had it within their power to avert such misery. …

Just as individuals facing grief and confusion look for reasons where none may exist, so, too, will entire societies gravitate toward outlandish theories when collectively subjected to a terrible national experience. Conspiracy theories and the flawed reasoning behind them… become especially seductive “in any society that has suffered an epic, collectively felt trauma. In the aftermath, millions of people find themselves casting about for an answer to the ancient question of why bad things happen to good people.” …

Today, conspiracy theories are a reaction mostly to the economic and social dislocations of globalization… This is not a trivial obstacle when it comes to the problems of expert engagement with the public: nearly 30 percent of Americans, for example, think “a secretive elite with a globalist agenda is conspiring to eventually rule the world” …

Obviously stupid. A not-secret elite with a globalist agenda already rules the world.

and 15 percent think the media or the government adds secret mind-controlling technology to TV broadcasts. (Another 15 percent aren’t sure about the TV issue.)

It’s called “advertising” and it wants you to buy a Ford.

Anyway, the problem with conspiracy theories is they are unfalsifiable; no amount of evidence will ever convince a conspiracy theorist that he is wrong, for all evidence is just further proof of how nefariously “they” are constructing the conspiracy.

Then Nichols gets into some interesting matter on the difference between stereotypes and generalizations, which segues nicely into a tangent I’d like to discuss, but it probably deserves its own post. To summarize:

Sometimes experts know things that contradict other people’s political (or religious) beliefs… If an “expert” finding or field accords with established liberal values–e.g., the implicit association test found that “everyone is a little bit racist,” which liberals already believed–then there is an easy mesh between what the academics believe and the rest of their social class.

If their findings contradict conservative/low-class values–e.g., when professors assert that evolution is true and “those low-class Bible-thumpers in Oklahoma are wrong”–sure, they might have a lot of people who disagree with them, but those people aren’t part of their own social class/the upper class, and so not a problem. If anything, high-class folks love such findings, because they give them a chance to talk about how much better they are than those low-class people (though such class conflict is obviously poisonous in a democracy where those low-class people can still vote to Fuck You and Your Global Warming, Too.)

But if the findings contradict high-class/liberal politics, then the experts have a real problem. E.g., if that same evolution professor turns around and says, “By the way, race is definitely biologically real, and there are statistical differences in average IQ between the races,” now he’s contradicting the political values of his own class/the upper class, and that becomes a social issue and he is likely to get Watsoned.

For years folks at Fox News (and talk radio) have lambasted “the media” even though they are part of the media; SSC recently discussed “can something be both popular and silenced?”

Jordan Peterson isn’t unpopular or “silenced” so much as he is disliked by upper class folks and liked by “losers” and low class folks, despite the fact that he is basically an intellectual guy and isn’t peddling a low-class product. Likewise, Fox News is just as much part of The Media as NPR, (if anything, it’s much more of the Media) but NPR is higher class than Fox, and Fox doesn’t like feeling like its opinions are being judged along this class axis.

For better or for worse (mostly worse) class politics and political/religious beliefs strongly affect our opinions of “experts,” especially those who say things we disagree with.

But back to Nichols: Dunning-Kruger effect, fake cultural literacy, and too many people at college. Nichols is a professor and has seen college students up close and personal, and has a low opinion of most of them. The massive expansion of higher education has not resulted in a better-educated, smarter populace, he argues, but a populace armed with expensive certificates that show they sat around a college for 4 years without learning much of anything. Unfortunately, beyond a certain level, there isn’t a lot that more school can do to increase people’s basic aptitudes.

Colleges get money by attracting students, which incentivises them to hand out degrees like candy–in other words, students are being lied to about their abilities and college degrees are fast becoming the participation trophies for the not very bright.

Nichols has little sympathy for modern students:

Today, by contrast, students explode over imagined slights that are not even remotely in the same category as fighting for civil rights or being sent to war. Students now build majestic Everests from the smallest molehills, and they descend into hysteria over pranks and hoaxes. In the midst of it all, the students are learning that emotions and volume can always defeat reason and substance, thus building about themselves fortresses that no future teacher, expert, or intellectual will ever be able to breach.

At Yale in 2015, for example, a house master’s wife had the temerity to tell minority students to ignore Halloween costumes they thought offensive. This provoked a campus wide temper tantrum that included professors being shouted down by screaming students. “In your position as master,” one student howled in a professor’s face, “it is your job to create a place of comfort and home for the students… Do you understand that?!”

Quietly, the professor said, “No, I don’t agree with that,” and the student unloaded on him:

“Then why the [expletive] did you accept the position?! Who the [expletive] hired you?! You should step down! If that is what you think about being a master you should step down! It is not about creating an intellectual space! It is not! Do you understand that? It’s about creating a home here. You are not doing that!” [emphasis added]

Yale, instead of disciplining students in violation of their own norms of academic discourse, apologized to the tantrum throwers. The house master eventually resigned from his residential post…

To faculty everywhere, the lesson was obvious: the campus of a top university is not a place for intellectual exploration. It is a luxury home, rented for four to six years, nine months at a time, by children of the elite who may shout at faculty as if they’re berating clumsy maids in a colonial mansion.

The incident Nichols cites (and similar ones elsewhere) is not just a matter of college students being dumb or entitled, but an explicitly racial conflict. The demand for “safe spaces” is easy to ridicule on the grounds that students are emotional babies, but this misses the point: students are carving out territory for themselves on explicitly racial lines, often by violence.

Nichols, though, either does not notice the racial aspect of modern campus conflicts or does not want to admit publicly to doing so.

Nichols moves on to blame TV, especially CNN, talk radio, and the internet for dumbing down the quality of discourse by overwhelming us with a deluge of more information than we can possibly process.

Referring back to Auerswald and The Code Economy: if automation creates a bifurcation in industries, replacing a moderately-priced, moderately available product with a stream of cheap, low-quality products on the one hand and a trickle of expensive, high-quality products on the other, then good-quality journalism has been replaced with a flood of low-quality crap. The high-quality end is still working itself out.

Nichols opines:

Accessing the Internet can actually make people dumber than if they had never engaged a subject at all. The very act of searching for information makes people think they’ve learned something, when in fact they’re more likely to be immersed in yet more data they do not understand. …

When a group of experimental psychologists at Yale investigated how people use the internet, they found that “people who search for information on the Web emerge from the process with an inflated sense of how much they know–even regarding topics that are unrelated to the ones they Googled.” …

How can exposure to so much information fail to produce at least some kind of increased baseline of knowledge, if only by electronic osmosis? How can people read so much yet retain so little? The answer is simple: few people are actually reading what they find.

As a University College London (UCL) study found, people don’t actually read the articles they encounter during a search on the Internet. Instead, they glance at the top line or the first few sentences and then move on. Internet users, the researchers noted, “are not reading online in the traditional sense; indeed, there are signs that new forms of ‘reading’ are emerging as users ‘power browse’ horizontally through titles, contents pages and abstracts going for quick wins. It almost seems that they go online to avoid reading in the traditional sense.”

The internet’s demand for instant updates, for whatever headlines generate the most clicks (and thus advertising revenue), has upset the balance of speed vs. expertise in the newsroom. Reporters no longer have any incentive to spend long hours carefully writing well-researched stories when such stories pay less than clickbait headlines about racist pet costumes and celebrity tweets.

I realize it seems churlish to complain about the feast of news and information brought to us by the Information Age, but I’m going to complain anyway. Changes in journalism, like the increased access to the Internet and to college education, have unexpectedly corrosive effects on the relationship between laypeople and experts. Instead of making people better informed, much of what passes for news in the twenty-first century often leaves laypeople–and sometimes experts–even more confused and ornery.

Experts face a vexing challenge: there’s more news available, and yet people seem less informed, a trend that goes back at least a quarter century. Paradoxically, it is a problem that is worsening rather than dissipating. …

As long ago as 1990, for example, a study conducted by the Pew Trust warned that disengagement from important public questions was actually worse among people under thirty, the group that should have been most receptive to then-emerging sources of information like cable television and electronic media. This was a distinct change in American civic culture, as the Pew study noted:

“Over most of the past five decades younger members of the public have been at least as well informed as older people. In 1990, that is no longer the case. … “

Those respondents are now themselves middle-aged, and their children are faring no better.

If you were 30 in 1990, you were born in 1960, to parents who were between the ages of 20 and 40 years old, that is, born between 1920 and 1940.

Source: Audacious Epigone

Fertility for the 1920-1940 cohort was strongly dysgenic. So was the 1940-50 cohort. The 1900-1919 cohort at least had the Flynn Effect on their side, but later cohorts just look like an advertisement for idiocracy.

Nichols ends with a plea that voters respect experts (and that experts, in turn, be humble and polite to voters.) After all, modern society is too complicated for any of us to be experts on everything. If we don’t pay attention to expert advice, he warns, modern society is bound to end in ignorant goo.

The logical inconsistency is that Nichols still believes in democracy at all–he thinks democracy can be saved if ignorant people vote within a range of options defined by experts like himself, e.g., “What vaccine options are best?” rather than “Should we have vaccines at all?”

The problem, then, is that whoever controls the experts (or controls which expert opinions people hear) controls the limits of policy debates. This leads to people arguing over experts, which leads right back where we are today. As long as there are politics, “expertise” will be politicized, eg:

Look at any court case in which both sides bring in their own “expert” witnesses. Both experts testify to the effect that their side is correct. Then the jury is left to vote on which side had more believable experts. This is close to a best-case scenario for voting, and the fact that the voters are dumb, don’t understand what the experts are saying, and are obviously being misled in many cases is still a huge problem.

If politics is the problem, then perhaps getting rid of politics is the solution. Just have a bunch of Singapores run by Lee Kwan Yews, let folks like Nichols advise them, and let the common people “vote with their feet” by moving to the best states.

The problem with this solution is that “exit” doesn’t exist in the modern world in any meaningful way, and there are significant reasons why ordinary people oppose open borders.

Conclusion: 3/5 stars. It’s not a terrible book, and Nichols has plenty of good points, but “Americans are dumb” isn’t exactly fresh territory and much has already been written on the subject.

Book Club: The Code Economy ch. 1

Greetings! Grab a cup of coffee and pull up a chair. Tea is also good. Today we’re diving into chapter one of Philip Auerswald’s The Code Economy, “Jobs: Divide and Coordinate.”

I wish this chapter had been much longer; we speed through almost 2.5 million years of cognitive evolution in a couple of pages.

The earliest hominins had about the same short-term memory as a modern-day chimpanzee, which is to say they could keep track of only two operations at a time. … Our methods for creating tools gradually became more sophisticated, until we were using the tools we created to produce other tools in a repetitive and predictable manner. These processes for creating stone tools were among humanity’s first production algorithms–that is, the earliest code. They appeared almost simultaneously in human communities in most parts of the world around 40,000 BC.

Footnote:

…[E.O.] Wilson refers to this phenomenon more broadly as the discovery of eusocial behavior… Wilson situates the date far earlier in human history than I do here. I chose 50,000 years [ago] because my focus is on the economy. It is clear that an epochal change in society occurred roughly 10,000 years BCE, when humans invented agriculture in six parts of the world simultaneously. The fact of this simultaneity directly suggests the advance of code represented by the invention of agriculture was part of a forward movement of code that started much earlier.

What do you think? Does the simultaneous advent of behavioral modernity–or eusociality–in far-flung human groups roughly 50,000 years ago, followed by the simultaneous advent of agriculture in several far-flung groups about 10,000 years ago speak to the existence of some universal, underlying process? Why did so many different groups of people develop similar patterns of life and technology around the same time, despite some of them being highly isolated? Was society simply inevitable?

The caption on the photo is similarly interesting:

Demand on Short-Term Working Memory in the Production of an Obsidian Axe [from Read and van der Leeuw, 2015] … We can relate the concepts invoked in the production of stone tools to the number of dimensions involved and thereby to the size of short-term working memory (STWM) required for the production of the kind of stone tools that exemplify each stage in hominin evolution. …

Just hitting the end of a pebble once to create one edge, as in the simplest tools, requires (they calculate) holding three items in working memory. Removing several flakes to create a longer edge (a line) takes STWM 4; working an entire side takes STWM 5; and working both sides of the stone in preparation for knapping flakes from the third requires both an ability to think about the pebble’s shape in three dimensions and STWM 7.
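If it helps to see that correspondence laid out all at once, here is a minimal sketch in Python of the stage-to-STWM mapping as I’ve just described it (the stage labels are my own shorthand, not Read and van der Leeuw’s):

```python
# Working-memory demands for successive stone-tool "grades," paraphrasing the
# figures quoted above. Stage labels are my own shorthand; only the STWM
# values come from the text.
STWM_BY_STAGE = {
    "strike one edge of a pebble (simplest tools)": 3,
    "remove several flakes to form a longer edge (a line)": 4,
    "work an entire side (a surface)": 5,
    "work both sides in preparation for knapping the third (3-D shaping)": 7,
}

for stage, stwm in sorted(STWM_BY_STAGE.items(), key=lambda kv: kv[1]):
    print(f"STWM {stwm}: {stage}")
```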

(The Wikipedia article on Lithic Reduction has a lovely animation of the technique.)

It took about 2 million years to proceed from the simplest tools (working memory: 3) to the most complex (working memory: 7.) Since the Neolithic, our working memory hasn’t improved–most of us are still limited to a mere 7 items in our working memory, just enough to remember a phone number if you already know the area code.

All of our advances since the Neolithic, Auerswald argues, haven’t been due to an increase in STWM, but our ability to build complexity externally: through code. And it was this invention of code that really made society take off.

By about 10,000 BCE, humans had formed the first villages… Villages were the precursors of modern-day business firms in that they were durable associations built around routines. … the advance of code at the village level through the creation of new technological combinations set into motion the evolution from simplicity to complexity that has resulted in the modern economy.

It was in the village, then, that code began to evolve.

What do you think? Are Read and van der Leeuw just retroactively fitting numbers 3-7 to the tools, or do they really show an advance in working memory? Is the village really the source of most code evolution? And who do you think is more correct, Herbert Spencer or Thomas Malthus?

Auerswald then fast-forwards to 1557, with the first use of the word “job” (spelled “jobbe,” most likely from “gobbe,” or lump.)

The advent of the “jobbe” as a lump of work was to the evolution of modern society something like what the first single-celled organism was to the evolution of life.

!

The “jobbe” contrasted with the obligation to perform labor continuously and without clearly defined roles–slavery, serfdom, indentured servitude, or even apprenticeship–as had been the norm throughout human history.

Did the Black Death help create the modern “job market” by inspiring Parliament to pass the Statute of Laborers?

I am reminded here of a passage from Gulick’s Evolution of the Japanese, Social and Psychic, (published in 1903):

The idea of making a bargain when two persons entered upon some particular piece of work, the one as employer, the other as employed, was entirely repugnant to the older generation, since it was assumed that their relations as inferior and superior should determine their financial relations; the superior would do what was right, and the inferior should accept what the superior might give without a question or a murmur. Among the samurai, where the arrangement is between equals, bargaining or making fixed and fast terms which will hold to the end, and which may be carried to the courts in case of differences, was a thing practically unknown in the older civilization. Everything of a business nature was left to honor, and was carried on in mutual confidence.

“A few illustrations of this spirit of confidence from my own experience may not be without interest. On first coming to Japan, I found it usual for a Japanese who wished to take a jinrikisha to call the runner and take the ride without making any bargain, giving him at the end what seemed right. And the men generally accepted the payment without question. I have found that recently, unless there is some definite understanding arrived at before the ride, there is apt to be some disagreement, the runner presuming on the hold he has, by virtue of work done, to get more than is customary. This is especially true in case the rider is a foreigner. Another set of examples in which astonishing simplicity and confidence were manifested was in the employment of evangelists. I have known several instances in which a full correspondence with an evangelist with regard to his employment was carried on, and the settlement finally concluded, and the man set to work without a word said about money matters. It need hardly be said that no foreigner took part in that correspondence. …

“This confidence and trustfulness were the product of a civilization resting on communalistic feudalism; the people were kept as children in dependence on their feudal lord; they had to accept what he said and did; they were accustomed to that order of things from the beginning and had no other thought; on the whole too, without doubt, they received regular and kindly treatment. Furthermore, there was no redress for the peasant in case of harshness; it was always the wise policy, therefore, for him to accept whatever was given without even the appearance of dissatisfaction. This spirit was connected with the dominance of the military class. Simple trustfulness was, therefore, chiefly that of the non-military classes.

“Since the overthrow of communal feudalism and the establishment of an individualistic social order, necessitating personal ownership of property, and the universal use of money, trustful confidence is rapidly passing away.

We still identify ourselves with our professions–“I am a doctor” or “I am a paleontologist”–but much less so than in the days when “Smith” wasn’t just a name.

Auerswald progresses to the modern day:

In the past two hundred years, the complexity of human economic organization has increased by orders of magnitude. Death rates began to fall rapidly in the middle of the nineteenth century, due to a combination of increased agricultural output, improved hygiene, and the beginning of better medical practices–all different dimensions of the advance of code…. Greater numbers of people living in greater density than ever before accelerated the advance of code.

Sounds great, but:

By the twentieth century, the continued advance of code necessitated the creation of government bureaucracies and large corporations that employed vast numbers of people. These organizations executed code of sufficient complexity that it was beyond the capacity of any single individual to master.

I’ve often wondered if the explosion of communist disasters at the beginning of the 20th century occurred because we could imagine a kind of nation-wide code for production and consumption and we had the power to implement it, but we didn’t actually have the capabilities and tools necessary to make it work.

We can imagine Utopia, but we cannot reach it.

Auerswald delineates two broad categories of “epochal change” as a result of the code-explosion of the past two centuries: First, our capabilities grew. Second:

“we have, to an increasing degree, ceded to other people–and to code itself–authority and autonomy, which for millennia we had kept unto ourselves and our immediate tribal groups as uncodified cultural norms.”

Before the “job”, before even the “trade,” people lived and worked far more at their own discretion. Hoeing fields or gathering yams might be long and tedious work, but at least you didn’t have to pee in a bottle because Amazon didn’t give you time for bathroom breaks.

Every time voters demand that politicians “bring back the jobs” or politicians promise to create them, we are implicitly stating that the vast majority of people are no longer capable of making their own jobs. (At least, not jobs that afford a modern lifestyle.) The Appalachians lived in utter poverty (the vast majority of people before 1900 lived in what we would now call utter poverty), but they did not depend on anyone else to create “jobs” for them; they cleared their own land, planted their own corn, hunted their own hogs, and provided for their own needs.

Today’s humans are probably no less intelligent or innately capable than the average Appalachian of 1900, but the economy (and our standards of living) are much more complex. The average person no longer has the capacity to drive job growth in such a complicated system, but the solution isn’t necessarily for everyone to become smarter. After all, large, complicated organizations need hundreds of employees who are not out founding their own companies.

But this, in turn, means all of those employees–and even the companies themselves–are dependent on forces far outside their control, like Chinese monetary policy or the American electoral cycle. And this, in turn, raises demand for some kind of centralized, planned system to protect the workers from economic hardship and ensure that everyone enjoys a minimum standard of living.

Microstates suggest themselves as a way to speed the evolution of economic code by increasing the total number of organisms in the ecosystem.

With eusociality, man already became a political (that is, polis) animal around 10,000 or 40,000 or perhaps 100,000 years ago, largely unable to subsist on his own, absent the tribe. We do not seem to regret this ill-remembered transition very much, but what about the current one? Is the job-man somehow less human, less complete than the tradesman? Do we feel that something essential to the human spirit has been lost in defining and routinizing our daily tasks down to the minute, forcing men to bend to the timetables of factories and international corporations? Or have we, through the benefits of civilization (mostly health improvements) gained something far richer?

When did language evolve?

The smartest non-human primates, like Kanzi the bonobo and Koko the gorilla, understand about 2,000 to 4,000 words. Koko can make about 1,000 signs in sign language and Kanzi can use about 450 lexigrams (pictures that stand for words.) Koko can also make some onomatopoetic words–that is, she can make and use imitative sounds in conversation.

A four-year-old human knows about 4,000 words, similar to an exceptional gorilla. An adult knows about 20,000-35,000 words. (Another study puts the upper bound at 42,000.)

Somewhere along our journey from ape-like hominins to homo sapiens sapiens, our ancestors began talking, but exactly when remains a mystery. The origins of writing have been amusingly easy to discover, because early writers were fond of very durable surfaces, like clay, stone, and bone. Speech, by contrast, evaporates as soon as it is heard–leaving no trace for archaeologists to uncover.

But we can find the things necessary for speech and the things for which speech, in turn, is necessary.

The main reason why chimps and gorillas, even those taught human language, must rely on lexigrams or gestures to communicate is that their voiceboxes, lungs, and throats work differently than ours. Their semi-arboreal lifestyle requires using the ribs as a rigid base for the arm and shoulder muscles while climbing, which in turn requires closing the lungs while climbing to provide support for the ribs.

Full bipedalism released our early ancestors from the constraints on airway design imposed by climbing, freeing us to make a wider variety of vocalizations.

Now is the perfect time to break out my file of relevant human evolution illustrations:

Source: Scientific American What Makes Humans Special

We humans split from our nearest living ape relatives about 7-8 million years ago, but true bipedalism may not have evolved for a few more million years. Since there are many different named hominins, here is a quick guide:

Source: Macroevolution in and Around the Hominin Clade

Australopithecines (light blue in the graph,) such as the famous Lucy, are believed to have been the first fully bipedal hominins, although, based on the shape of their toes, they may have still occasionally retreated into the trees. They lived between 4 and 2 million years ago.

Without delving into the myriad classification debates along the lines of “should we count this set of skulls as a separate species or are they all part of the natural variation within one species,” by the time the Homo genus arose with H. habilis or H. rudolfensis around 2.8 million years ago, humans were much worse at climbing trees.

Interestingly, one direction humans have continued evolving in is up.

Oldowan tool

The reliable production of stone tools represents an enormous leap forward in human cognition. The first known stone tools–Oldowan–are about 2.5-2.6 million years old and were probably made by Homo habilis. These simple tools are typically shaped on only one side.

By the Acheulean–1.75 million-100,000 years ago–tool making had become much more sophisticated. Not only did knappers shape both sides of their stones, top and bottom, but they also made tools by first shaping a core stone and then flaking derivative pieces from it.

The first Acheulean tools were fashioned by H. erectus; by 100,000 years ago, H. sapiens had presumably taken over the technology.

Flint knapping is surprisingly difficult, as many an archaeology student has discovered.

These technological advances were accompanied by steadily increasing brain sizes.

I propose that the complexities of the Acheulean tool complex required some form of language to facilitate learning and teaching; this gives us a potential lower bound on language around 1.75 million years ago. Bipedalism gives us an upper bound around 4 million years ago, before which our voice boxes were likely more restricted in the sounds they could make.

A Different View

Even though “Homo sapiens” has been around for about 300,000 years (or at least, that is where we have chosen to draw the line between our species and the previous one), “behavioral modernity” only emerged around 50,000 years ago (very awkward timing if you know anything about human dispersal.)

Everything about behavioral modernity is heavily contested (including when it began,) but no matter how and when you date it, compared to the million years or so it took humans to figure out how to knap the back side of a rock, human technological advance has accelerated significantly over the past 100,000 years, even more so over the past 50,000, and more still over the past 10,000.

Fire was another of humanity’s early technologies:

Claims for the earliest definitive evidence of control of fire by a member of Homo range from 1.7 to 0.2 million years ago (Mya).[1] Evidence for the controlled use of fire by Homo erectus, beginning some 600,000 years ago, has wide scholarly support.[2][3] Flint blades burned in fires roughly 300,000 years ago were found near fossils of early but not entirely modern Homo sapiens in Morocco.[4] Evidence of widespread control of fire by anatomically modern humans dates to approximately 125,000 years ago.[5]

What prompted this sudden acceleration? Noam Chomsky suggests that it was triggered by the evolution of our ability to use and understand language:

Noam Chomsky, a prominent proponent of discontinuity theory, argues that a single chance mutation occurred in one individual in the order of 100,000 years ago, installing the language faculty (a component of the mind–brain) in “perfect” or “near-perfect” form.[6]

(Pumpkin Person has more on Chomsky.)

More specifically, we might say that this single chance mutation created the capacity for figurative or symbolic language, as clearly apes already have the capacity for very simple language. It was this ability to convey abstract ideas, then, that allowed humans to begin expressing themselves in other abstract ways, like cave painting.

I disagree with this view on the grounds that human groups were already pretty widely dispersed by 100,000 years ago. For example, Pygmies and Bushmen are descended from groups of humans who had already split off from the rest of us by then, but they still have symbolic language, art, and everything else contained in the behavioral modernity toolkit. Of course, if a trait is particularly useful or otherwise successful, it can spread extremely quickly (think lactose tolerance,) and neither Bushmen nor Pygmies were 100% genetically isolated for the past 250,000 years, but I simply think the math here doesn’t work out.

However, that doesn’t mean Chomsky isn’t on to something. For example, Johanna Nichols (another linguist,) used statistical models of language differentiation to argue that modern languages split around 100,000 years ago.[31] This coincides neatly with the upper bound on the Out of Africa theory, suggesting that Nichols may actually have found the point when language began differentiating because humans left Africa, or perhaps she found the origin of the linguistic skills necessary to accomplish humanity’s cross-continental trek.

Philip Lieberman and Robert McCarthy looked at the shape of Neanderthal, H. erectus, early H. sapiens, and modern H. sapiens vocal tracts:

In normal adults these two portions of the SVT form a right angle to one another and are approximately equal in length—in a 1:1 proportion. Movements of the tongue within this space, at its midpoint, are capable of producing tenfold changes in the diameter of the SVT. These tongue maneuvers produce the abrupt diameter changes needed to produce the formant frequencies of the vowels found most frequently among the world’s languages—the “quantal” vowels [i], [u], and [a] of the words “see,” “do,” and “ma.” In contrast, the vocal tracts of other living primates are physiologically incapable of producing such vowels.

(Since juvenile humans are shaped differently than adults, they pronounce sounds slightly differently until their voiceboxes fully develop.)

Their results:

…Neanderthal necks were too short and their faces too long to have accommodated equally proportioned SVTs. Although we could not reconstruct the shape of the SVT in the Homo erectus fossil because it does not preserve any cervical vertebrae, it is clear that its face (and underlying horizontal SVT) would have been too long for a 1:1 SVT to fit into its head and neck. Likewise, in order to fit a 1:1 SVT into the reconstructed Neanderthal anatomy, the larynx would have had to be positioned in the Neanderthal’s thorax, behind the sternum and clavicles, much too low for effective swallowing. …

Surprisingly, our reconstruction of the 100,000-year-old specimen from Israel, which is anatomically modern in most respects, also would not have been able to accommodate a SVT with a 1:1 ratio, albeit for a different reason. … Again, like its Neanderthal relatives, this early modern human probably had an SVT with a horizontal dimension longer than its vertical one, translating into an inability to reproduce the full range of today’s human speech.

It was only in our reconstruction of the most recent fossil specimens—the modern humans postdating 50,000 years— that we identified an anatomy that could have accommodated a fully modern, equally proportioned vocal tract.

Just as small children who can’t yet pronounce the letter “r” can nevertheless make and understand language, I don’t think early humans needed to have all of the same sounds as we have in order to communicate with each other. They would have just used fewer sounds.

The change in our voiceboxes may not have triggered the evolution of language, but rather been triggered by language itself. As humans began transmitting more knowledge via language, those who could make more sounds and utter a greater range of words perhaps had an edge over their peers–maybe they were seen as particularly clever, or perhaps they had an easier time organizing bands of hunters and warriors.

One of the interesting things about human language is that it is clearly simultaneously cultural–which language you speak is entirely determined by culture–and genetic–only humans can produce language in the way we do. Even the smartest chimps and dolphins cannot match our vocabularies, nor imitate our sounds. Human infants–unless they have some form of brain damage–learn language instinctually, without conscious teaching. (Insert reference to Steven Pinker.)

Some kind of genetic changes were obviously necessary to get from apes to human language use, but exactly what remains unclear.

A variety of genes are associated with language use, e.g., FOXP2. H. sapiens and chimps have different versions of the FOXP2 gene (and Neanderthals had a third, more similar to the H. sapiens version than to the chimps’), but to my knowledge we have yet to discover exactly when the necessary mutations arose.

Despite their impressive skulls and survival in a harsh, novel climate, Neanderthals seem not to have engaged in much symbolic activity (though to be fair, they were wiped out right about the time Sapiens really got going with its symbolic activity.) Homo sapiens and Homo neanderthalensis split around 800,000-400,000 years ago–perhaps the difference in our language genes ultimately gave Sapiens the upper hand.

Just as farming appears to have emerged relatively independently in several different locations around the world at about the same time, so behavioral modernity seems to have taken off in several different groups around the same time. Of course we can’t rule out the possibility that these groups had some form of contact with each other–peaceful or otherwise–but it seems more likely to me that similar behaviors emerged in disparate groups around the same time because the cognitive precursors necessary for those behaviors had already begun before they split.

Based on genetics, the shape of their larynges, and their cultural toolkits, Neanderthals probably did not have modern speech, but they may have had something similar to it. This suggests that at the time of the Sapiens-Neanderthal split, our common ancestor possessed some primitive speech capacity.

By the time Sapiens and Neanderthals encountered each other again, nearly half a million years later, Sapiens’ language ability had advanced, possibly due to further modification of FOXP2 and other genes like it, plus our newly modified voiceboxes, while Neanderthals’ had lagged. Sapiens achieved behavioral modernity and took over the planet, while Neanderthals disappeared.

 

Anthropology Friday: Numbers and the Making of Us, by Caleb Everett, pt 3

Welcome back to our discussion of Numbers and the Making of Us: Counting and the Course of Human Cultures, by Caleb Everett.

The Pirahã are a small tribe (about 420) of Amazonian hunter-gatherers whose language is nearly unique: it has no numbers, and you can whistle it. Everett spent much of his childhood among the Piraha because his parents were missionaries, which probably makes him one of the world’s foremost non-Piraha experts on the Piraha.

Occasionally as a child I would wake up in the jungle to the cacophony of people sharing their dreams with one another–impromptu monologues followed by spurts of intense feedback. The people in question, a fascinating (to me anyhow) group known as the Piraha, are known to wake up and speak to their immediate neighbors at all hours of the night. … the voices suggested the people in the village were relaxed and completely unconcerned with my own preoccupations. …

The Piraha village my family lived in was reachable via a one-week sinuous trip along a series of Amazonian tributaries, or alternatively by a one-hour flight in a Cessna single-engine airplane.

Piraha culture is, to say the least, very different from ours. Everett cites studies of Piraha counting ability in support of his idea that our ability to count past 3 is a culturally acquired process–that is, we can only count because we grew up in a numeric society where people taught us numbers, and the Piraha can’t count because they grew up in an anumeric society that not only lacks numbers, but lacks various other abstractions necessary for helping make sense of numbers. Our innate, genetic numerical abilities, (the ability to count to three and distinguish between small and large amounts,) he insists, are the same.

You see, the Piraha really can’t count. Line up 3 spools of thread and ask them to make an identical line, and they can do it. Line up 4 spools of thread, and they start getting the wrong number of spools. Line up 10 spools of thread, and it’s obvious that they’re just guessing and you’re wasting your time. Put five nuts in a can, then take two out and ask how many nuts are left: you get a response on the order of “some.”*

And this is not for lack of trying. The Piraha know other people have these things called “numbers.” They once asked Everett’s parents, the missionaries, to teach them numbers so they wouldn’t get cheated in trade deals. The missionaries tried for 8 months to teach them to count to ten and add small sums like 1 + 1. It didn’t work and the Piraha gave up.

Despite these difficulties, Everett insists that the Piraha are not dumb. After all, they survive in a very complex and demanding environment. He grew up with them; many of them are his personal friends and he regards them as mentally normal people with the exact same genetic abilities as everyone else who just lack the culturally-acquired skill of counting.

After all, on a standard IQ scale, someone who cannot even count to 4 would be severely if not profoundly retarded, institutionalized and cared for by others. The Piraha obviously live independently, hunt, raise, and gather their own food, navigate through the rainforest, raise their own children, build houses, etc. They aren’t building aqueducts, but they are surviving perfectly well outside of an institution.

Everett neglects the possibility that the Piraha are otherwise normal people who are innately bad at math.

Normally, yes, different mental abilities correlate because they depend highly on things like “how fast is your brain overall” or “were you neglected as a child?” But people also vary in their mental abilities. I have a friend who is above average in reading and writing abilities, but is almost completely unable to do math. This is despite being raised in a completely numerate culture, going to school, etc.

This is a really obvious and life-impairing problem in a society like ours, where you have to use math to function; my friend has been marked since childhood as “not cognitively normal.” It would be a completely invisible non-problem in a society like the Piraha, who use no math at all; in Piraha society, my friend would be “a totally normal guy” (or at least close.)

Everett states, explicitly, that not only are the Piraha only constrained by culture, but other people’s abilities are also directly determined by their cultures:

What is probably more remarkable about the relevant studies, though, is that they suggest that climbing any rungs of the arithmetic ladder requires numbers. How high we climb the ladder is not the result of our own inherent intelligence, but a result of the language we speak and of the culture we are born into. (page 136)

This is an absurd claim. Even my own children, raised in identically numerate environments and possessing, on the global scale, nearly identical genetics, vary in math abilities. You are probably not identical in abilities to your relatives, childhood classmates, next door neighbors, spouse, or office mates. We observe variations in mathematical abilities within cultures, families, cities, towns, schools, and virtually any group you chose that isn’t selected for math abilities. We can’t all do calculus just because we happen to live in a culture with calculus textbooks.

In fact, there is an extensive literature (which Everett ignores) on the genetics of intelligence:

Various studies have found the heritability of IQ to be between 0.7 and 0.8 in adults and 0.45 in childhood in the United States.[6][18][19] It may seem reasonable to expect that genetic influences on traits like IQ should become less important as one gains experiences with age. However, that the opposite occurs is well documented. Heritability measures in infancy are as low as 0.2, around 0.4 in middle childhood, and as high as 0.8 in adulthood.[7] One proposed explanation is that people with different genes tend to seek out different environments that reinforce the effects of those genes.[6] The brain undergoes morphological changes in development which suggests that age-related physical changes could also contribute to this effect.[20]

A 1994 article in Behavior Genetics based on a study of Swedish monozygotic and dizygotic twins found the heritability of the sample to be as high as 0.80 in general cognitive ability; however, it also varies by trait, with 0.60 for verbal tests, 0.50 for spatial and speed-of-processing tests, and 0.40 for memory tests. In contrast, studies of other populations estimate an average heritability of 0.50 for general cognitive ability.[18]

In 2006, The New York Times Magazine listed about three quarters as a figure held by the majority of studies.[21]

Thanks to Jayman

In plain speak, this means that intelligence in healthy adults is about 70-80% genetic and the rest seems to be random chance (like whether you were dropped on your head as a child or had enough iodine). So far, no one has proven that things like whole language vs. phonics instruction or two parents vs. one in the household have any effect on IQ, though they might affect how happy you are.
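For the curious, the classic back-of-the-envelope way to turn twin correlations like those quoted above into a heritability estimate is Falconer’s formula: heritability is roughly twice the gap between identical-twin and fraternal-twin correlations. The sketch below is not the method the cited studies actually use (they fit fancier structural-equation models), and the correlation values in it are purely hypothetical, chosen only to show the arithmetic:

```python
# Falconer's classic approximation for decomposing trait variance from twin
# correlations. Real studies (like those quoted above) use more sophisticated
# models; this is only a sketch, and the correlations below are hypothetical,
# not taken from any cited study.

def falconer(r_mz: float, r_dz: float) -> dict:
    """Estimate heritability (h2), shared environment (c2), and
    unshared environment (e2) from MZ and DZ twin correlations."""
    h2 = 2 * (r_mz - r_dz)      # additive genetic variance
    c2 = (2 * r_dz) - r_mz      # shared (family) environment
    e2 = 1 - r_mz               # everything else, including measurement noise
    return {"h2": h2, "c2": c2, "e2": e2}

# Hypothetical adult IQ correlations: identical twins 0.80, fraternal twins 0.45.
print(falconer(r_mz=0.80, r_dz=0.45))
# -> roughly {'h2': 0.70, 'c2': 0.10, 'e2': 0.20}
```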

(Childhood IQ is much more amenable to environmental changes like “good teachers,” but these effects wear off as soon as children aren’t being forced to go to school every day.)

A full discussion of the scientific literature is beyond our current scope, but if you aren’t convinced about the heritability of IQ–including math abilities–I urge you to go explore the literature yourself–you might want to start with some of Jayman’s relevant FAQs on the subject.

Everett uses experiments done with the Piraha to support his claim that mathematical ability is culturally dependent, but this depends on his claim that the Piraha are cognitively identical to the rest of us in innate mathematical ability. Given that normal people are not cognitively identical in innate mathematical abilities, and mathematical abilities vary, on average, between groups (this is why people buy “Singapore Math” books and not “Congolese Math,”) there is no particular reason to assume Piraha and non-Piraha are cognitively identical. Further, there’s no reason to assume that any two groups are cognitively identical.

Mathematics only really got started when people invented agriculture, as they needed to keep track of things like “How many goats do I have?” or “Have the peasants paid their taxes?” A world in which mathematical ability is useful will select for mathematical ability; a world where it is useless cannot select for it.

Everett may still be correct that you wouldn’t be able to count if you hadn’t been taught how, but the Piraha can’t prove that one way or another. He would first have to show that Piraha who are raised in numerate cultures (say, by adoption,) are just as good at calculus as people from Singapore or Japan, but he cites no adoption studies nor anything else to this end. (And adoption studies don’t even show that for the groups we have studied, like whites, blacks, or Asians.)

Let me offer a cognitive contrast:

The Piraha are an anumeric, illiterate culture. They have encountered both letters and numbers, but not adopted them.

The Cherokee were once illiterate: they had no written language. Around 1809, an illiterate Cherokee man, Sequoyah, observed whites reading and writing letters. In a flash of insight, Sequoyah understood the concept of “use a symbol to encode a sound” even without being taught to read English. He developed his own alphabet (really a syllabary) for writing Cherokee sounds and began teaching it to others. Within 5 years of the syllabary’s completion, a majority of the Cherokee were literate; they soon had their own publishing industry producing Cherokee-language books and newspapers.

The Cherokee, though illiterate, possessed the innate ability to be literate, if only exposed to the cultural idea of letters. Once exposed, literacy spread rapidly–instantly, in human cultural evolution terms.

By contrast, the Piraha, despite their desire to adopt numbers, have not been able to do so.

(Yet. With enough effort, the Piraha probably can learn to count–after all, there are trained parrots who can count to 8. It would be strange if they permanently underperformed parrots. But it’s a difficult journey.)

That all said, I would like to make an anthropological defense of anumeracy: numeracy, as in ascribing exact values to specific items, is more productive in some contexts than others.

Do you keep track of the exact values of things you give your spouse, children, or close friends? If you invite a neighbor over for a meal, do you mark down what it cost to feed them and then expect them to feed you the same amount in return? Do you count the exact value of gifts and give the same value in return?

In Kabloona, de Poncins discusses the quasi-communist nature of the Eskimo economic system. For the Eskimo, hunter-gatherers living in the world’s harshest environment, the unit of exchange isn’t the item, but survival. A man whom you keep alive by giving him fish today is a man who can keep you alive by giving you fish tomorrow. Declaring that you will only give a starving man five fish because he previously gave you five fish will do you no good at all if he starves from not enough fish and can no longer give you some of his fish when he has an excess. The fish have, in this context, no innate, immutable value–they are as valuable as the life they preserve. To think otherwise would kill them.

It’s only when people have goods to trade, regularly, with strangers, that they begin thinking of objects as having defined values that hold steady over different transactions. A chicken is more valuable if I am starving than if I am not, but it has an identical value whether I am trading it for nuts or cows.

So it is not surprising that most agricultural societies have more complicated number systems than most hunter-gatherer societies. As Everett explains:

Led by Patience Epps of the University of Texas, a team of linguists recently documented the complexity of the number systems in many of the world’s languages. In particular, the researchers were concerned with the languages’ upper numerical limit–the highest quantity with a specific name. …

We are fond of coining new names for numbers in English, but the largest commonly used number name is googol (googolplex I define as an operation), though there are bigger ones, like Graham’s number.

The linguistic team in question found the upper numerical limits in 193 languages of hunter-gatherer cultures in Australia, Amazonia, Africa, and North America. Additionally, they examined the upper limits of 204 languages spoken by agriculturalists and pastoralists in these regions. They discovered that the languages of hunter-gatherer groups generally have low upper limits. This is particularly true in Australia and Amazonia, the regions with so-called pure hunter-gatherer subsistence strategies.

In the case of the Australian languages, the study in question observed that more than 80 percent are limited numerically, with the highest quantity represented in such cases being only 3 or 4. Only one Australian language, Gamilaraay, was found to have an upper limit above 10, and its highest number is for 20. … The association [between hunter-gathering and limited numbers] is also robust in South America and Amazonia more specifically. The languages of hunter-gatherer cultures in this region generally have upper limits below ten. Only one surveyed language … Huaorani, has numbers for quantities greater than 20. Approximately two-thirds of the languages of such groups in the region have upper limits of five or less, while one-third have an upper limit of 10. Similarly, about two-thirds of African hunter-gatherer languages have upper limits of 10 or less.

There are a few exceptions–agricultural societies with very few numbers, and hunter-gatherers with relatively large numbers of numbers, but:

…there are no large agricultural states without elaborate number systems, now or in recorded history.

So how did the first people develop numbers? Of course we don’t know, but Everett suggests that at some point we began associating collections of things, like shells, with the cluster of fingers found on our hands. One finger, one shell; five fingers, five shells–easy correspondences. Once we mastered five, we skipped forward to 10 and 20 rather quickly.
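The finger-to-shell idea is just one-to-one correspondence, which you can carry out without ever naming a quantity. Here is a minimal sketch, with made-up item names, of how matching works without counting:

```python
# One-to-one correspondence without number words: pair items off and see
# whether anything is left over. At no point do we name a quantity.
def same_manyness(collection_a, collection_b) -> bool:
    """Return True if the two collections pair off exactly."""
    a, b = list(collection_a), list(collection_b)
    while a and b:          # set one item against one item...
        a.pop()
        b.pop()
    return not a and not b  # ...a leftover on either side means a mismatch

# Item names are placeholders for illustration only.
fingers = ["thumb", "index", "middle", "ring", "little"]
shells = ["shell_a", "shell_b", "shell_c", "shell_d", "shell_e"]
print(same_manyness(fingers, shells))  # True: every finger gets a shell
```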

Everett proposes that some numeracy was a necessary prerequisite for agriculture, as agricultural people would need to keep track of things like seasons and equinoxes in order to know when to plant and harvest. I question this on the grounds that I myself don’t look at the calendar and say, “Oh look, it’s the equinox, I’d better plant my garden!” but instead look outside and say, “Oh, it’s getting warm and the grass is growing again, I’d better get busy.” The harvest is even more obvious: I harvest when the plants are ripe.

Of course, I live in a society with calendars, so I can’t claim that I don’t look at the calendar. I look at the calendar almost every day to make sure I have the date correct. So perhaps I am using my calendrical knowledge to plan my planting schedule without even realizing it because I am just so used to looking at the calendar.

“What man among you, if he has 100 sheep and has lost 1 of them, does not leave the 99 in the open pasture and go after the one which is lost until he finds it? When he has found it, he lays it on his shoulders, rejoicing.” Luke 15:3-5

Rather than develop numbers and then start planting barley and millet, I propose that humans first domesticated animals, like pigs and goats. At first people were content to have “a few,” “some,” or “many” animals, but soon they were inspired to keep better track of their flocks.

By the time we started planting millet and wheat (a couple thousand years later,) we were probably already pretty good at counting sheep.

Our fondness for tracking astronomical cycles, I suspect, began for less utilitarian reasons: they were there. The cycles of the sun, moon, and other planets were obvious and easy to track, and we wanted to figure out what they meant. We put a ton of work into tracking equinoxes and eclipses and the epicycles of Jupiter and Mars (before we figured out heliocentrism.) People ascribed all sorts of import to these cycles (“Communicator Mercury is retrograde in outspoken Sagittarius from December 3-22, mixing up messages and disrupting pre-holiday plans.”) that turned out to be completely wrong. Unless you’re a fisherman or sailor, the moon’s phases don’t make any difference in your life; the other planets’ cycles turned out to be completely useless unless you’re trying to send a space probe to visit them. Eclipses are interesting, but don’t have any real effects. For all of the effort we’ve put into astronomy, the most important results have been good calendars to keep track of dates and allow us to plan multiple years into the future.

Speaking of dates, let’s continue this discussion in a week–on the next Anthropology Friday.

*Footnote: Even though I don’t think the Piraha prove as much as Everett thinks they do, that doesn’t mean Everett is completely wrong. Maybe already having number words is (in the vast majority of cases) a necessary precondition for learning to count.

One potentially illuminating case Everett didn’t explore is how young children in numerate culture acquire numbers. Obviously they grow up in an environment with numbers, but below a certain age can’t really use them. Can children at these ages duplicate lines of objects or patterns? Or do they master that behavior only after learning to count?

Back in October I commented on Schiller and Peterson’s claim in Count on Math (a book of math curriculum ideas for toddlers and preschoolers) that young children must learn mathematical “foundation” concepts in a particular order, i.e.:

Developmental sequence is fundamental to children’s ability to build conceptual understanding. … The chapters in this book present math in a developmental sequence that provides children a natural transition from one concept to the next, preventing gaps in their understanding. …

When children are allowed to explore many objects, they begin to recognize similarities and differences of objects.

When children can determine similarities and differences, they can classify objects.

When children can classify objects, they can see similarities and difference well enough to recognize patterns.

When children can recognize, copy, extend and create patterns, they can arrange sets in a one-to-one relationship.

When children can match objects one to one, they can compare sets to determine which have more and which have less.

When children can compare sets, they can begin to look at the “manyness” of one set and develop number concepts.

This developmental sequence provides a conceptual framework that serves as a springboard to developing higher level math skills.

The Count on Math curriculum doesn’t even introduce the numbers 1-5 until week 39 for 4-year-olds (3-year-olds are never introduced to numbers), and numbers 6-10 aren’t introduced until week 37 for the 5-year-olds!

Note that Schiller and Everett are arguing diametrical opposites–Everett says the ability to count to three and distinguish the “manyness” of sets is instinctual, present even in infants, but that the ability to copy patterns and match items one-to-one only comes after long acquaintance and practice with counting, specifically number words.

Schiller claims that children only develop the ability to distinguish manyness and count to three after learning to copy patterns and match items one-to-one.

As I said back in October, I think Count on Math’s claim is pure bollocks. If you miss the “comparing sets” day at preschool, you aren’t going to end up unable to multiply. The Piraha may not prove as much as Everett wants them to, but the neuroscience and animal studies he cites aren’t worthless. In general, I distrust anyone who claims that you must introduce this long a set of concepts in this strict an order just to develop a basic competency that the vast majority of people seem to acquire without difficulty.

Of course, Lynne Peterson is a real teacher with a real teacher’s certificate and a BA in … it doesn’t say, and Pam Schiller was Vice President of Professional Development for the Early childhood Division at McGraw Hill publishers and president of the Southern Early Childhood Association. She has a PhD in… it doesn’t say. Here’s some more on Dr. Schiller’s many awards. So maybe they know better than Everett, who’s just an anthropologist. But Everett has some actual evidence on his side.

But I’m a parent who has watched several children learn to count… and Schiller and Peterson are wrong.