Neuropolitics: “Openness” and Cortical Thickness

Brain anatomy–gyri

I ran across an interesting study today, on openness, creativity, and cortical thickness.

The psychological trait of “openness”–that is, willingness to try new things or experiences–correlates with other traits like creativity and political liberalism. (This might be changing as cultural shifts are changing what people mean by “liberalism,” but it was true a decade ago and is still statistically true today.)

Researchers took a set of 185 intelligent people studying or employed in STEM, gave them personality tests intended to measure “openness,” and then scanned their brains to measure cortical thickness in various areas.

According to Citizendium, “Cortical thickness” is:

a brain morphometric measure used to describe the combined thickness of the layers of the cerebral cortex in mammalian brains, either in local terms or as a global average for the entire brain. Given that cortical thickness roughly correlates with the number of neurons within an ontogenetic column, it is often taken as indicative of the cognitive abilities of an individual, albeit the latter are known to have multiple determinants.

According to the article in PsyPost, reporting on the study:

“The key finding from our study was that there was a negative correlation between Openness and cortical thickness in regions of the brain that underlie memory and cognitive control. This is an interesting finding because typically reduced cortical thickness is associated with decreased cognitive function, including lower psychometric measures of intelligence,” Vartanian told PsyPost.

Citizendium explains some of the issues associated with cortices that are too thin or too thick:

Typical values in adult humans are between 1.5 and 3 mm, and during aging, a decrease (also known as cortical thinning) on the order of about 10 μm per year can be observed [3]. Deviations from these patterns can be used as diagnostic indicators for brain disorders: While Alzheimer’s disease, even very early on, is characterized by pronounced cortical thinning[4], Williams syndrome patients exhibit an increase in cortical thickness of about 5-10% in some regions [5], and lissencephalic patients show drastic thickening, up to several centimetres in occipital regions[6].

Obviously people with Alzheimer’s have difficulty remembering things, but people with Williams Syndrome also tend to be low-IQ and have difficulty with memory.

Of course, the cortex is a big region, and it may matter specifically where yours is thin or thick. In this study, the thinness was found in the left middle frontal gyrus, left middle temporal gyrus, left superior temporal gyrus, left inferior parietal lobule, right inferior parietal lobule, and right middle temporal gyrus.

These are areas that, according to the study’s authors, have previously been shown to be activated during neuroimaging studies of creativity, and so are exactly the places where you would expect to see some kind of anatomical difference in particularly creative people.

Hypothetically, maybe reduced cortical thickness, in some people, makes them worse at remembering specific kinds of experiences–and thus more likely to try new ones. For example, if I remember very strongly that I like Tomato Sauce A, and that I hate Tomato Sauce B, I’m likely to just keep buying A. But if every time I go to the store I only have a vague memory that there was a tomato sauce I really liked, I might just pick sauces at random–eventually trying all of them.

The authors have a different interpretation:

“We believe that the reason why Openness is associated with reduced cortical thickness is that this condition reduces the person’s ability to filter the contents of thought, thereby facilitating greater immersion in the sensory, cognitive, and emotional information that might otherwise have been filtered out of consciousness.”

So, less meta-brain, more direct experience? Less worrying, more experiencing?

The authors note a few problems with the study (for starters, it is hardly a representative sample of either “creative” people or exceptional geniuses, being limited to people in STEM,) but it is still an interesting piece of data and I hope to see more like it.

 

If you want to read more about brains, I recommend Kurzweil’s How to Create a Mind, which I am reading now. It goes into some detail on relevant brain structures, and how they work to create memories, recognize patterns, and let us create thought. (Incidentally, the link goes to Amazon Smile, which raises money for charity; I selected St. Jude’s.)

Maybe America is too Dumb for Democracy: A Review of Nichols’s The Death of Expertise

For today’s Cathedral Round-Up, I finally kept my commitment to review Tom Nichols’s The Death of Expertise: The Campaign against Established Knowledge and Why it Matters. It was better than I expected (though that isn’t saying much.)

Make no mistake: Nichols is annoyingly arrogant. He draws a rather stark line between “experts” (who know things) and everyone else (who should humbly limit themselves to voting between options defined for them by the experts.) He implores people to better educate themselves in order to be better voters, but has little patience for autodidacts and bloggers like myself who are actually trying.

But arrogance alone doesn’t make someone wrong.

Nichols’s first thesis is simple: most people are too stupid or ignorant to second-guess experts or even contribute meaningfully to modern policy discussions. How can people who can’t find Ukraine on a map or think we should bomb the fictional city of Agrabah contribute in any meaningful way to a discussion of international policy?

It was one thing, in 1776, to think the average American could vote meaningfully on the issues of the day–a right they took by force, by shooting anyone who told them they couldn’t. Life was less complicated in 1776, and the average person could master most of the skills they needed to survive (indeed, pioneers on the edge of the frontier had to be mostly self-sufficient in order to survive.) Life was hard–most people engaged in long hours of heavy labor plowing fields, chopping wood, harvesting crops, and hauling necessities–but could be mastered by people who hadn’t graduated from elementary school.

But the modern industrial (or post-industrial) world is much more complicated than the one our ancestors grew up in. Today we have cars (maybe even self-driving cars), electrical grids and sewer systems, atomic bombs and fast food. The speed of communication and transportation has made it possible to chat with people on the other side of the earth and show up on their doorstep a day later. The amount of specialized, technical knowledge necessary to keep modern society running would astonish the average caveman–even with 15+ years of schooling, the average person can no longer build a house, nor even produce basic necessities like clothes or food. Most of us can’t even make a pencil.

Even experts who are actually knowledgeable about their particular area may be completely ignorant of fields outside of their expertise. Nichols speaks Russian, which makes him an expert in certain Russian-related matters, but he probably knows nothing about optimal high-speed rail networks. And herein lies the problem:

The American attachment to intellectual self-reliance described by Tocqueville survived for nearly a century before falling under a series of assaults from both within and without. Technology, universal secondary education, the proliferation of specialized expertise, and the emergence of the United States as a global power in the mid-twentieth century all undermined the idea… that the average American was adequately equipped either for the challenges of daily life or for running the affairs of a large country.

… the political scientist Richard Hofstadter wrote that “the complexity of modern life has steadily whittled away the functions the ordinary citizen can intelligently and competently perform for himself.”

… Somin wrote in 2015 that the “size and complexity of government” have made it “more difficult for voters with limited knowledge to monitor and evaluate the government’s many activities. The result is a polity in which the people often cannot exercise their sovereignty responsibly and effectively.”

In other words, society is now too complex and people too stupid for democracy.

Nichols’s second thesis is that people used to trust experts, which let democracy function, but today they are less trusting. He offers no evidence other than his general conviction that this change has happened.

He does, however, detail how he thinks that 1. people have been given inflated egos about their own intelligence, and 2. our information-delivery system has degenerated into misinformational goo, resulting in the trust problems he believes we are having. These are interesting arguments and worth examining.

A bit of summary:

Indeed, maybe the death of expertise is a sign of progress. Educated professionals, after all, no longer have a stranglehold on knowledge. The secrets of life are no longer hidden in giant marble mausoleums… in the past, there was less stress between experts and laypeople, but only because citizens were simply unable to challenge experts in any substantive way. …

Participation in political, intellectual, and scientific life until the early twentieth century was far more circumscribed, with debates about science, philosophy, and public policy all conducted by a small circle of educated males with pen and ink. Those were not exactly the Good Old Days, and they weren’t that long ago. The time when most people didn’t finish high school, when very few went to college, and only a tiny fraction of the population entered professions is still within living memory of many Americans.

Aside from Nichols’s insistence that he believes modern American notions about gender and racial equality, I get the impression that he wouldn’t mind the Good Old Days of genteel pen-and-ink discussions between intellectuals. However, I question his claim that participation in political life was far more circumscribed–after all, people voted, and politicians liked getting people to vote for them. People everywhere, even illiterate peasants on the frontier or up in the mountains, like to gather and debate about God, politics, and the meaning of life. The question is less “Did they discuss it?” and more “Did their discussions have any effect on politics?” Certainly we can point to abolition, women’s suffrage, prohibition, and the Revolution itself as heavily grass-roots movements.

But continuing with Nichols’s argument:

Social changes only in the past half century finally broke down old barriers of race, class, and sex not only between Americans in general but also between uneducated citizens and elite experts in particular. A wider circle of debate meant more knowledge but more social friction. Universal education, the greater empowerment of women and minorities, the growth of a middle class, and increased social mobility all threw a minority of experts and the majority of citizens into direct contact, after nearly two centuries in which they rarely had to interact with each other.

And yet the result has not been a greater respect for knowledge, but the growth of an irrational conviction among Americans that everyone is as smart as everyone else.

Nichols is distracting himself with the reflexive racial argument; the important change he is highlighting isn’t social but technical.

I’d like to quote a short exchange from Our Southern Highlanders, an anthropology-style text written about Appalachia about a century ago:

The mountain clergy, as a general rule, are hostile to “book larnin’,” for “there ain’t no Holy Ghost in it.” One of them who had spent three months at a theological school told President Frost, “Yes, the seminary is a good place ter go and git rested up, but ’tain’t worth while fer me ter go thar no more ’s long as I’ve got good wind.”

It used to amuse me to explain how I knew that the earth was a sphere; but one day, when I was busy, a tiresome old preacher put the everlasting question to me: “Do you believe the earth is round?” An impish perversity seized me and I answered, “No—all blamed humbug!” “Amen!” cried my delighted catechist, “I knowed in reason you had more sense.”

But back to Nichols, who really likes the concept of expertise:

One reason claims of expertise grate on people in a democracy is that specialization is necessarily exclusive. When we study a certain area of knowledge or spend our lives in a particular occupation, we not only forgo expertise in other jobs or subjects, but also trust that other people in the community know what they’re doing in their area as surely as we do in our own. As much as we might want to go up to the cockpit after the engine flames out to give the pilots some helpful tips, we assume–in part, because we have to–that they’re better able to cope with the problem than we are. Otherwise, our highly evolved society breaks down into islands of incoherence, where we spend our time in poorly informed second-guessing instead of trusting each other.

This would be a good point to look at data on overall trust levels, friendship, civic engagement, etc. (It’s down. It’s all down.) and maybe some explanations for these changes.

Nichols talks briefly about the accreditation and verification process for producing “experts,” which he rather likes. There is an interesting discussion in the economics literature on things like the economics of trust and information (how do websites signal that they are trustworthy enough that you will give them your credit card number and expect to receive items you ordered a few days later?) which could apply here, too.

Nichols then explores a variety of cognitive biases, such as superstitions, phobias, and conspiracy theories:

Conspiracy theories are also a way for people to give meaning to events that frighten them. Without a coherent explanation for why terrible things happen to innocent people, they would have to accept such occurrences as nothing more than the random cruelty either of an uncaring universe or an incomprehensible deity. …

The only way out of this dilemma is to imagine a world in which our troubles are the fault of powerful people who had it within their power to avert such misery. …

Just as individuals facing grief and confusion look for reasons where none may exist, so, too, will entire societies gravitate toward outlandish theories when collectively subjected to a terrible national experience. Conspiracy theories and the flawed reasoning behind them… become especially seductive “in any society that has suffered an epic, collectively felt trauma. In the aftermath, millions of people find themselves casting about for an answer to the ancient question of why bad things happen to good people.” …

Today, conspiracy theories are a reaction mostly to the economic and social dislocations of globalization…This is not a trivial obstacle when it comes to the problems of expert engagement with the public: nearly 30 percent of Americans, for example, think “a secretive elite with a globalist agenda is conspiring to eventually rule the world” …

Obviously stupid. A not-secret elite with a globalist agenda already rules the world.

and 15 percent think media or government add secret mind controlling technology to TV broadcasts. (Another 15 percent aren’t sure about the TV issue.)

It’s called “advertising” and it wants you to buy a Ford.

Anyway, the problem with conspiracy theories is they are unfalsifiable; no amount of evidence will ever convince a conspiracy theorist that he is wrong, for all evidence is just further proof of how nefariously “they” are constructing the conspiracy.

Then Nichols gets into some interesting matter on the difference between stereotypes and generalizations, which segues nicely into a tangent I’d like to discuss, but it probably deserves its own post. To summarize:

Sometimes experts know things that contradict other people’s political (or religious) beliefs… If an “expert” finding or field accords with established liberal values, e.g., the implicit association test’s finding that “everyone is a little bit racist,” which liberals already believed, then there is an easy mesh between what the academics believe and the rest of their social class.

If their findings contradict conservative/low-class values, e.g., when professors assert that evolution is true and “those low-class Bible-thumpers in Oklahoma are wrong,” sure, they might have a lot of people who disagree with them, but those people aren’t part of their own social class/the upper class, and so aren’t a problem. If anything, high-class folks love such findings, because it gives them a chance to talk about how much better they are than those low-class people (though such class conflict is obviously poisonous in a democracy where those low-class people can still vote to Fuck You and Your Global Warming, Too.)

But if the findings contradict high-class/liberal politics, then the experts have a real problem. E.g., if that same evolution professor turns around and says, “By the way, race is definitely biologically real, and there are statistical differences in average IQ between the races,” now he’s contradicting the political values of his own class/the upper class, and that becomes a social issue and he is likely to get Watsoned.

For years folks at Fox News (and talk radio) have lambasted “the media” even though they are part of the media; SSC recently discussed “can something be both popular and silenced?”

Jordan Peterson isn’t unpopular or “silenced” so much as he is disliked by upper class folks and liked by “losers” and low class folks, despite the fact that he is basically an intellectual guy and isn’t peddling a low-class product. Likewise, Fox News is just as much part of The Media as NPR, (if anything, it’s much more of the Media) but NPR is higher class than Fox, and Fox doesn’t like feeling like its opinions are being judged along this class axis.

For better or for worse (mostly worse) class politics and political/religious beliefs strongly affect our opinions of “experts,” especially those who say things we disagree with.

But back to Nichols: Dunning-Kruger effect, fake cultural literacy, and too many people at college. Nichols is a professor and has seen college students up close and personal, and has a low opinion of most of them. The massive expansion of higher education has not resulted in a better-educated, smarter populace, he argues, but a populace armed with expensive certificates that show they sat around a college for 4 years without learning much of anything. Unfortunately, beyond a certain level, there isn’t a lot that more school can do to increase people’s basic aptitudes.

Colleges get money by attracting students, which incentivises them to hand out degrees like candy–in other words, students are being lied to about their abilities and college degrees are fast becoming the participation trophies for the not very bright.

Nichols has little sympathy for modern students:

Today, by contrast, students explode over imagined slights that are not even remotely in the same category as fighting for civil rights or being sent to war. Students now build majestic Everests from the smallest molehills, and they descend into hysteria over pranks and hoaxes. In the midst of it all, the students are learning that emotions and volume can always defeat reason and substance, thus building about themselves fortresses that no future teacher, expert, or intellectual will ever be able to breach.

At Yale in 2015, for example, a house master’s wife had the temerity to tell minority students to ignore Halloween costumes they thought offensive. This provoked a campus-wide temper tantrum that included professors being shouted down by screaming students. “In your position as master,” one student howled in a professor’s face, “it is your job to create a place of comfort and home for the students… Do you understand that?!”

Quietly, the professor said, “No, I don’t agree with that,” and the student unloaded on him:

“Then why the [expletive] did you accept the position?! Who the [expletive] hired you?! You should step down! If that is what you think about being a master you should step down! It is not about creating an intellectual space! It is not! Do you understand that? It’s about creating a home here. You are not doing that!” [emphasis added]

Yale, instead of disciplining students in violation of their own norms of academic discourse, apologized to the tantrum throwers. The house master eventually resigned from his residential post…

To faculty everywhere, the lesson was obvious: the campus of a top university is not a place for intellectual exploration. It is a luxury home, rented for four to six years, nine months at a time, by children of the elite who may shout at faculty as if they’re berating clumsy maids in a colonial mansion.

The incidents Nichols cites (and similar ones elsewhere,) are not just matters of college students being dumb or entitled, but explicitly racial conflicts. The demand for “safe spaces” is easy to ridicule on the grounds that students are emotional babies, but this misses the point: students are carving out territory for themselves on explicitly racial lines, often by violence.

Nichols, though, either does not notice the racial aspect of modern campus conflicts or does not want to admit publicly to doing so.

Nichols moves on to blame TV, especially CNN, talk radio, and the internet for dumbing down the quality of discourse by overwhelming us with a deluge of more information than we can possibly process.

Referring back to Auerswald and The Code Economy: if automation creates a bifurcation in industries, replacing a moderately-priced, moderately available product with a stream of cheap, low-quality products on the one hand and a trickle of expensive, high-quality products on the other, then good-quality journalism has been replaced with a flood of low-quality crap. The high-quality end is still working itself out.

Nichols opines:

Accessing the Internet can actually make people dumber than if they had never engaged a subject at all. The very act of searching for information makes people think they’ve learned something, when in fact they’re more likely to be immersed in yet more data they do not understand. …

When a group of experimental psychologists at Yale investigated how people use the internet, they found that “people who search for information on the Web emerge from the process with an inflated sense of how much they know–even regarding topics that are unrelated to the ones they Googled.” …

How can exposure to so much information fail to produce at least some kind of increased baseline of knowledge, if only by electronic osmosis? How can people read so much yet retain so little? The answer is simple: few people are actually reading what they find.

As a University College London (UCL) study found, people don’t actually read the articles they encounter during a search on the Internet. Instead, they glance at the top line or the first few sentences and then move on. Internet users, the researchers noted, “are not reading online in the traditional sense; indeed, there are signs that new forms of ‘reading’ are emerging as users ‘power browse’ horizontally through titles, contents pages and abstracts going for quick wins. It almost seems that they go online to avoid reading in the traditional sense.”

The internet’s demand for instant updates, for whatever headlines generate the most clicks (and thus advertising revenue), has upset the balance of speed vs. expertise in the newsroom. Reporters no longer have any incentive to spend long hours carefully writing a well-researched story when such stories pay less than clickbait headlines about racist pet costumes and celebrity tweets.

I realize it seems churlish to complain about the feast of news and information brought to us by the Information Age, but I’m going to complain anyway. Changes in journalism, like the increased access to the Internet and to college education, have unexpectedly corrosive effects on the relationship between laypeople and experts. Instead of making people better informed, much of what passes for news in the twenty-first century often leaves laypeople–and sometimes experts–even more confused and ornery.

Experts face a vexing challenge: there’s more news available, and yet people seem less informed, a trend that goes back at least a quarter century. Paradoxically, it is a problem that is worsening rather than dissipating. …

As long ago as 1990, for example, a study conducted by the Pew Trust warned that disengagement from important public questions was actually worse among people under thirty, the group that should have been most receptive to then-emerging sources of information like cable television and electronic media. This was a distinct change in American civic culture, as the Pew study noted:

“Over most of the past five decades younger members of the public have been at least as well informed as older people. In 1990, that is no longer the case. … “

Those respondents are now themselves middle-aged, and their children are faring no better.

If you were 30 in 1990, you were born in 1960, to parents who were between the ages of 20 and 40 years old, that is, born between 1920 and 1940.
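This back-of-the-envelope arithmetic is simple enough to mechanize. A trivial sketch of my own (nothing from Pew or from the chart below), just to make the assumptions explicit:

```python
# Back-of-the-envelope cohort arithmetic (my own illustration).
# Assumption: parents were between 20 and 40 years old when the respondent was born.

def parent_birth_range(child_birth_year, min_parent_age=20, max_parent_age=40):
    """Return the (earliest, latest) plausible birth years of the respondent's parents."""
    return child_birth_year - max_parent_age, child_birth_year - min_parent_age

survey_year, respondent_age = 1990, 30
child_birth_year = survey_year - respondent_age                 # 1960
print(child_birth_year, parent_birth_range(child_birth_year))   # 1960 (1920, 1940)
```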

Source: Audacious Epigone

Fertility for the 1920-1940 cohort was strongly dysgenic. So was the 1940-50 cohort. The 1900-1919 cohort at least had the Flynn Effect on their side, but later cohorts just look like an advertisement for idiocracy.

Nichols ends with a plea that voters respect experts (and that experts, in turn, be humble and polite to voters.) After all, modern society is too complicated for any of us to be experts on everything. If we don’t pay attention to expert advice, he warns, modern society is bound to end in ignorant goo.

The logical inconsistency is that Nichols still believes in democracy at all–he thinks democracy can be saved if ignorant people vote within a range of options as defined by experts like himself, e.g., “What vaccine options are best?” rather than “Should we have vaccines at all?”

The problem, then, is that whoever controls the experts (or controls which expert opinions people hear) controls the limits of policy debates. This leads to people arguing over experts, which leads right back where we are today. As long as there are politics, “expertise” will be politicized, e.g.:

Look at any court case in which both sides bring in their own “expert” witnesses. Both experts testify to the effect that their side is correct. Then the jury is left to vote on which side had more believable experts. This is close to a best-case scenario for voting, and the fact that the voters are dumb, don’t understand what the experts are saying, and are obviously being misled in many cases is still a huge problem.

If politics is the problem, then perhaps getting rid of politics is the solution. Just have a bunch of Singapores run by Lee Kuan Yews, let folks like Nichols advise them, and let the common people “vote with their feet” by moving to the best states.

The problem with this solution is that “exit” doesn’t exist in the modern world in any meaningful way, and there are significant reasons why ordinary people oppose open borders.

Conclusion: 3/5 stars. It’s not a terrible book, and Nichols has plenty of good points, but “Americans are dumb” isn’t exactly fresh territory and much has already been written on the subject.

Book Club: The Code Economy ch. 1

Greetings! Grab a cup of coffee and pull up a chair. Tea is also good. Today we’re diving into chapter one of Philip Auerswald’s The Code Economy, “Jobs: Divide and Coordinate.”

I wish this chapter had been much longer; we speed through almost 2.5 million years of cognitive evolution in a couple of pages.

The earliest hominins had about the same short-term memory as a modern-day chimpanzee, which is to say they could keep track of only two operations at a time. … Our methods for creating tools gradually became more sophisticated, until we were using the tools we created to produce other tools in a repetitive and predictable manner. These processes for creating stone tools were among humanity’s first production algorithms–that is, the earliest code. They appeared almost simultaneously in human communities in most parts of the world around 40,000 BC.

Footnote:

…[E.O.] Wilson refers to this phenomenon more broadly as the discovery of eusocial behavior… Wilson situates the date far earlier in human history than I do here. I chose 50,000 years [ago] because my focus is on the economy. It is clear that an epochal change in society occurred roughly 10,000 years BCE, when humans invented agriculture in six parts of the world simultaneously. The fact of this simultaneity directly suggests the advance of code represented by the invention of agriculture was part of a forward movement of code that started much earlier.

What do you think? Does the simultaneous advent of behavioral modernity–or eusociality–in far-flung human groups roughly 50,000 years ago, followed by the simultaneous advent of agriculture in several far-flung groups about 10,000 years ago speak to the existence of some universal, underlying process? Why did so many different groups of people develop similar patterns of life and technology around the same time, despite some of them being highly isolated? Was society simply inevitable?

The caption on the photo is similarly interesting:

Demand on Short-Term Working Memory in the Production of an Obsidian Axe [from Read and van der Leeuw, 2015] … We can relate the concepts invoked in the production of stone tools to the number of dimensions involved and thereby to the size of short-term working memory (STWM) required for the production of the kind of stone tools that exemplify each stage in hominin evolution. …

Just hitting the end of a pebble once to create one edge, as in the simplest tools, they calculate requires holding three items in the working memory. Removing several flakes to create a longer edge (a line), takes STWM 4; working an entire side takes STWM 5; and working both sides of the stone in preparation for knapping flakes from the third requires both an ability to think about the pebble’s shape in three dimensions and STWM 7.

(The Wikipedia article on Lithic Reduction has a lovely animation of the technique.)

It took about 2 million years to proceed from the simplest tools (working memory: 3) to the most complex (working memory: 7.) Since the Neolithic, our working memory hasn’t improved–most of us are still limited to a mere 7 items in our working memory, just enough to remember a phone number if you already know the area code.
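For anyone who wants the whole progression in one place, here is a compact restatement of the Read and van der Leeuw mapping as described above (the stage wording is my paraphrase, not the authors’ exact terminology):

```python
# Short-term working memory (STWM) demands of stone-tool production stages,
# per Read and van der Leeuw as summarized above (stage labels are my paraphrase).
stwm_by_stage = {
    "strike one flake to create a single edge": 3,
    "remove several flakes to create a longer edge (a line)": 4,
    "work an entire side of the stone": 5,
    "work both sides in preparation for knapping flakes from the third": 7,
}

for stage, stwm in sorted(stwm_by_stage.items(), key=lambda kv: kv[1]):
    print(f"STWM {stwm}: {stage}")
```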

All of our advances since the Neolithic, Auerswald argues, haven’t been due to an increase in STWM, but our ability to build complexity externally: through code. And it was this invention of code that really made society take off.

By about 10,000 BCE, humans had formed the first villages… Villages were the precursors of modern-day business firms in that they were durable associations built around routines. … the advance of code at the village level through the creation of new technological combinations set into motion the evolution from simplicity to complexity that has resulted in the modern economy.

It was in the village, then, that code began to evolve.

What do you think? Are Read and van der Leeuw just retroactively fitting numbers 3-7 to the tools, or do they really show an advance in working memory? Is the village really the source of most code evolution? And who do you think is more correct, Herbert Spencer or Thomas Malthus?

Auerswald then jumps forward to 1557, with the first use of the word “job” (spelled “jobbe,” most likely from “gobbe,” or lump.)

The advent of the “jobbe” as a lump of work was to the evolution of modern society something like what the first single-celled organism was to the evolution of life.

!

The “jobbe” contrasted with the obligation to perform labor continuously and without clearly defined roles–slavery, serfdom, indentured servitude, or even apprenticeship–as had been the norm throughout human history.

Did the Black Death help create the modern “job market” by inspiring Parliament to pass the Statute of Laborers?

I am reminded here of a passage from Gulick’s Evolution of the Japanese, Social and Psychic, (published in 1903):

The idea of making a bargain when two persons entered upon some particular piece of work, the one as employer, the other as employed, was entirely repugnant to the older generation, since it was assumed that their relations as inferior and superior should determine their financial relations; the superior would do what was right, and the inferior should accept what the superior might give without a question or a murmur. Among the samurai, where the arrangement is between equals, bargaining or making fixed and fast terms which will hold to the end, and which may be carried to the courts in case of differences, was a thing practically unknown in the older civilization. Everything of a business nature was left to honor, and was carried on in mutual confidence.

“A few illustrations of this spirit of confidence from my own experience may not be without interest. On first coming to Japan, I found it usual for a Japanese who wished to take a jinrikisha to call the runner and take the ride without making any bargain, giving him at the end what seemed right. And the men generally accepted the payment without question. I have found that recently, unless there is some definite understanding arrived at before the ride, there is apt to be some disagreement, the runner presuming on the hold he has, by virtue of work done, to get more than is customary. This is especially true in case the rider is a foreigner. Another set of examples in which astonishing simplicity and confidence were manifested was in the employment of evangelists. I have known several instances in which a full correspondence with an evangelist with regard to his employment was carried on, and the settlement finally concluded, and the man set to work without a word said about money matters. It need hardly be said that no foreigner took part in that correspondence. …

“This confidence and trustfulness were the product of a civilization resting on communalistic feudalism; the people were kept as children in dependence on their feudal lord; they had to accept what he said and did; they were accustomed to that order of things from the beginning and had no other thought; on the whole too, without doubt, they received regular and kindly treatment. Furthermore, there was no redress for the peasant in case of harshness; it was always the wise policy, therefore, for him to accept whatever was given without even the appearance of dissatisfaction. This spirit was connected with the dominance of the military class. Simple trustfulness was, therefore, chiefly that of the non-military classes.

“Since the overthrow of communal feudalism and the establishment of an individualistic social order, necessitating personal ownership of property, and the universal use of money, trustful confidence is rapidly passing away.

We still identify ourselves with our profession–“I am a doctor” or “I am a paleontologist”–but much less so than in the days when “Smith” wasn’t a name.

Auerswald progresses to the modern day:

In the past two hundred years, the complexity of human economic organization has  increased by orders of magnitude. Death rates began to fall rapidly in the middle of the nineteenth century, due to a combination of increased agricultural output, improved hygiene, and the beginning of better medical practices–all different dimensions of the advance of code…. Greater numbers of people living in greater density than ever before accelerated the advance of code.

Sounds great, but:

By the twentieth century, the continued advance of code necessitated the creation of government bureaucracies and large corporations that employed vast numbers of people. These organizations executed code of sufficient complexity that it was beyond the capacity of any single individual to master.

I’ve often wondered if the explosion of communist disasters at the beginning of the 20th century occurred because we could imagine a kind of nation-wide code for production and consumption and we had the power to implement it, but we didn’t actually have the capabilities and tools necessary to make it work.

We can imagine Utopia, but we cannot reach it.

Auerswald delineates two broad categories of “epochal change” as a result of the code-explosion of the past two centuries: First, our capabilities grew. Second:

“we have, to an increasing degree, ceded to other people–and to code itself–authority and autonomy, which for millennia we had kept unto ourselves and our immediate tribal groups as uncodified cultural norms.”

Before the “job”, before even the “trade,” people lived and worked far more at their own discretion. Hoeing fields or gathering yams might be long and tedious work, but at least you didn’t have to pee in a bottle because Amazon didn’t give you time for bathroom breaks.

Every time voters demand that politicians “bring back the jobs” or politicians promise to create them, we are implicitly stating that the vast majority of people are no longer capable of making their own jobs. (At least, not jobs that afford a modern lifestyle.) The Appalachians lived in utter poverty (the vast majority of people before 1900 lived in what we would now call utter poverty), but they did not depend on anyone else to create “jobs” for them; they cleared their own land, planted their own corn, hunted their own hogs, and provided for their own needs.

Today’s humans are probably not less intelligent nor less innately capable than the average Appalachian of 1900, but the economy (and our standards of living) are much more complex. The average person no longer has the capacity to drive job growth in such a complicated system, but the solution isn’t necessarily for everyone to become smarter. After all, large, complicated organizations need hundreds of employees who are not out founding their own companies.

But this, in turn, means all of those employees–and even the companies themselves–are dependent on forces far outside their control, like Chinese monetary policy or the American electoral cycle. And this, in turn, raises demand for some kind of centralized, planned system to protect the workers from economic hardship and ensure that everyone enjoys a minimum standard of living.

Microstates suggest themselves as a way to speed the evolution of economic code by increasing the total number of organisms in the ecosystem.

With eusociality, man already became a political (that is, polis) animal around 10,000 or 40,000 or perhaps 100,000 years ago, largely unable to subsist on his own, absent the tribe. We do not seem to regret this ill-remembered transition very much, but what about the current one? Is the job-man somehow less human, less complete than the tradesman? Do we feel that something essential to the human spirit has been lost in defining and routinizing our daily tasks down to the minute, forcing men to bend to the timetables of factories and international corporations? Or have we, through the benefits of civilization (mostly health improvements) gained something far richer?

When did language evolve?

The smartest non-human primates, like Kanzi the bonobo and Koko the gorilla, understand about 2,000 to 4,000 words. Koko can make about 1,000 signs in sign language and Kanzi can use about 450 lexigrams (pictures that stand for words.) Koko can also make some onomatopoetic words–that is, she can make and use imitative sounds in conversation.

A four-year-old human knows about 4,000 words, similar to an exceptional gorilla. An adult knows about 20,000-35,000 words. (Another study puts the upper bound at 42,000.)

Somewhere along our journey from ape-like hominins to Homo sapiens sapiens, our ancestors began talking, but exactly when remains a mystery. The origins of writing have been amusingly easy to discover, because early writers were fond of very durable surfaces, like clay, stone, and bone. Speech, by contrast, evaporates as soon as it is heard–leaving no trace for archaeologists to uncover.

But we can find the things necessary for speech and the things for which speech, in turn, is necessary.

The main reason why chimps and gorillas, even those taught human language, must rely on lexigrams or gestures to communicate is that their voiceboxes, lungs, and throats work differently than ours. Their semi-arboreal lifestyle requires using the ribs as a rigid base for the arm and shoulder muscles while climbing, which in turn requires closing the lungs while climbing to provide support for the ribs.

Full bipedalism released our early ancestors from the constraints on airway design imposed by climbing, freeing us to make a wider variety of vocalizations.

Now is the perfect time to break out my file of relevant human evolution illustrations:

Source: Scientific American What Makes Humans Special

We humans split from our nearest living ape relatives about 7-8 million years ago, but true bipedalism may not have evolved for a few more million years. Since there are many different named hominins, here is a quick guide:

Source: Macroevolution in and Around the Hominin Clade

Australopithecines (light blue in the graph,) such as the famous Lucy, are believed to have been the first fully bipedal hominins, although, based on the shape of their toes, they may have still occasionally retreated into the trees. They lived between 4 and 2 million years ago.

Without delving into the myriad classification debates along the lines of “should we count this set of skulls as a separate species or are they all part of the natural variation within one species,” by the time the Homo genus arises with H. habilis or H. rudolfensis around 2.8 million years ago, humans were much worse at climbing trees.

Interestingly, one direction humans have continued evolving in is up.

Oldowan tool

The reliable production of stone tools represents an enormous leap forward in human cognition. The first known stone tools–Oldowan–are about 2.5-2.6 million years old and were probably made by Homo habilis. These simple tools are typically shaped on only one side.

By the Acheulean–1.75 million-100,000 years ago–tool making had become much more sophisticated. Not only did knappers shape both sides of both the tops and bottoms of stones, but they also made tools by first shaping a core stone and then flaking derivative pieces from it.

The first Acheulean tools were fashioned by H. erectus; by 100,000 years ago, H. sapiens had presumably taken over the technology.

Flint knapping is surprisingly difficult, as many an archaeology student has discovered.

These technological advances were accompanied by steadily increasing brain sizes.

I propose that the complexities of the Acheulean tool complex required some form of language to facilitate learning and teaching; this gives us a potential lower bound on language around 1.75 million years ago. Bipedalism gives us an upper bound around 4 million years ago, before which our voice boxes were likely more restricted in the sounds they could make.

A Different View

Even though “Homo sapiens” has been around for about 300,000 years (or so we have defined the point where we chose to differentiate between our species and the previous one,) “behavioral modernity” only emerged around 50,000 years ago (very awkward timing if you know anything about human dispersal.)

Everything about behavioral modernity is heavily contested (including when it began,) but no matter how and when you date it, compared to the million years or so it took humans to figure out how to knap the back side of a rock, human technological advance has accelerated significantly over the past 100,000 years, and even more so over the past 50,000 and the past 10,000.

Fire was another of humanity’s early technologies:

Claims for the earliest definitive evidence of control of fire by a member of Homo range from 1.7 to 0.2 million years ago (Mya).[1] Evidence for the controlled use of fire by Homo erectus, beginning some 600,000 years ago, has wide scholarly support.[2][3] Flint blades burned in fires roughly 300,000 years ago were found near fossils of early but not entirely modern Homo sapiens in Morocco.[4] Evidence of widespread control of fire by anatomically modern humans dates to approximately 125,000 years ago.[5]

What prompted this sudden acceleration? Noam Chomsky suggests that it was triggered by the evolution of our ability to use and understand language:

Noam Chomsky, a prominent proponent of discontinuity theory, argues that a single chance mutation occurred in one individual in the order of 100,000 years ago, installing the language faculty (a component of the mind–brain) in “perfect” or “near-perfect” form.[6]

(Pumpkin Person has more on Chomsky.)

More specifically, we might say that this single chance mutation created the capacity for figurative or symbolic language, as clearly apes already have the capacity for very simple language. It was this ability to convey abstract ideas, then, that allowed humans to begin expressing themselves in other abstract ways, like cave painting.

I disagree with this view on the grounds that human groups were already pretty widely dispersed by 100,000 years ago. For example, Pygmies and Bushmen are descended from groups of humans who had already split off from the rest of us by then, but they still have symbolic language, art, and everything else contained in the behavioral modernity toolkit. Of course, if a trait is particularly useful or otherwise successful, it can spread extremely quickly (think lactose tolerance,) and neither Bushmen nor Pygmies were 100% genetically isolated for the past 250,000 years, but I simply think the math here doesn’t work out.

However, that doesn’t mean Chomsky isn’t on to something. For example, Johanna Nichols (another linguist,) used statistical models of language differentiation to argue that modern languages split around 100,000 years ago.[31] This coincides neatly with the upper bound on the Out of Africa theory, suggesting that Nichols may actually have found the point when language began differentiating because humans left Africa, or perhaps she found the origin of the linguistic skills necessary to accomplish humanity’s cross-continental trek.

Philip Lieberman and Robert McCarthy looked at the shapes of Neanderthal, Homo erectus, early H. sapiens, and modern H. sapiens vocal tracts:

In normal adults these two portions of the SVT form a right angle to one another and are approximately equal in length—in a 1:1 proportion. Movements of the tongue within this space, at its midpoint, are capable of producing tenfold changes in the diameter of the SVT. These tongue maneuvers produce the abrupt diameter changes needed to produce the formant frequencies of the vowels found most frequently among the world’s languages—the “quantal” vowels [i], [u], and [a] of the words “see,” “do,” and “ma.” In contrast, the vocal tracts of other living primates are physiologically incapable of producing such vowels.

(Since juvenile humans are shaped differently than adults, they pronounce sounds slightly differently until their voiceboxes fully develop.)

Their results:

…Neanderthal necks were too short and their faces too long to have accommodated equally proportioned SVTs. Although we could not reconstruct the shape of the SVT in the Homo erectus fossil because it does not preserve any cervical vertebrae, it is clear that its face (and underlying horizontal SVT) would have been too long for a 1:1 SVT to fit into its head and neck. Likewise, in order to fit a 1:1 SVT into the reconstructed Neanderthal anatomy, the larynx would have had to be positioned in the Neanderthal’s thorax, behind the sternum and clavicles, much too low for effective swallowing. …

Surprisingly, our reconstruction of the 100,000-year-old specimen from Israel, which is anatomically modern in most respects, also would not have been able to accommodate a SVT with a 1:1 ratio, albeit for a different reason. … Again, like its Neanderthal relatives, this early modern human probably had an SVT with a horizontal dimension longer than its vertical one, translating into an inability to reproduce the full range of today’s human speech.

It was only in our reconstruction of the most recent fossil specimens—the modern humans postdating 50,000 years— that we identified an anatomy that could have accommodated a fully modern, equally proportioned vocal tract.

Just as small children who can’t yet pronounce the letter “r” can nevertheless make and understand language, I don’t think early humans needed to have all of the same sounds as we have in order to communicate with each other. They would have just used fewer sounds.

The change in our voiceboxes may not have triggered the evolution of language, but instead have been triggered by language itself. As humans began transmitting more knowledge via language, those who could make more sounds, and thus utter a greater range of words, perhaps had an edge over their peers–maybe they were seen as particularly clever, or perhaps they had an easier time organizing bands of hunters and warriors.

One of the interesting things about human language is that it is clearly simultaneously cultural–which language you speak is entirely determined by culture–and genetic–only humans can produce language in the way we do. Even the smartest chimps and dolphins cannot match our vocabularies, nor imitate our sounds. Human infants–unless they have some form of brain damage–learn language instinctually, without conscious teaching. (Insert reference to Steven Pinker.)

Some kind of genetic changes were obviously necessary to get from apes to human language use, but exactly what remains unclear.

A variety of genes are associated with language use, e.g., FOXP2. H. sapiens and chimps have different versions of the FOXP2 gene (and Neanderthals have a third, more similar to the H. sapiens version than to the chimp’s,) but to my knowledge we have yet to discover exactly when the necessary mutations arose.

Despite their impressive skulls and survival in a harsh, novel climate, Neanderthals seem not to have engaged in much symbolic activity, (though to be fair, they were wiped out right about the time Sapiens really got going with its symbolic activity.) Homo sapiens and Homo neanderthalensis split around 800,000-400,000 years ago–perhaps the difference in our language genes ultimately gave Sapiens the upper hand.

Just as farming appears to have emerged relatively independently in several different locations around the world at about the same time, so behavioral modernity seems to have taken off in several different groups around the same time. Of course we can’t rule out the possibility that these groups had some form of contact with each other–peaceful or otherwise–but it seems more likely to me that similar behaviors emerged in disparate groups around the same time because the cognitive precursors necessary for those behaviors had already begun before they split.

Based on genetics, the shape of their larynges, and their cultural toolkits, Neanderthals probably did not have modern speech, but they may have had something similar to it. This suggests that at the time of the Sapiens-Neanderthal split, our common ancestor possessed some primitive speech capacity.

By the time Sapiens and Neanderthals encountered each other again, nearly half a million years later, Sapiens’ language ability had advanced, possibly due to further modification of FOXP2 and other genes like it, plus our newly modified voiceboxes, while Neanderthals’ had lagged. Sapiens achieved behavioral modernity and took over the planet, while Neanderthals disappeared.

 

Anthropology Friday: Numbers and the Making of Us, by Caleb Everett, pt 3

Welcome back to our discussion of Numbers and the Making of Us: Counting and the Course of Human Cultures, by Caleb Everett.

The Pirahã are a small tribe (about 420) of Amazonian hunter-gatherers whose language is nearly unique: it has no numbers, and you can whistle it. Everett spent much of his childhood among the Piraha because his parents were missionaries, which probably makes him one of the world’s foremost non-Piraha experts on the Piraha.

Occasionally as a child I would wake up in the jungle to the cacophony of people sharing their dreams with one another–impromptu monologues followed by spurts of intense feedback. The people in question, a fascinating (to me anyhow) group known as the Piraha, are known to wake up and speak to their immediate neighbors at all hours of the night. … the voices suggested the people in the village were relaxed and completely unconcerned with my own preoccupations. …

The Piraha village my family lived in was reachable via a one-week sinuous trip along a series of Amazonian tributaries, or alternatively by a one-hour flight in a Cessna single-engine airplane.

Piraha culture is, to say the least, very different from ours. Everett cites studies of Piraha counting ability in support of his idea that our ability to count past 3 is a culturally acquired process–that is, we can only count because we grew up in a numeric society where people taught us numbers, and the Piraha can’t count because they grew up in an anumeric society that not only lacks numbers, but lacks various other abstractions necessary for helping make sense of numbers. Our innate, genetic numerical abilities, (the ability to count to three and distinguish between small and large amounts,) he insists, are the same.

You see, the Piraha really can’t count. Line up 3 spools of thread and ask them to make an identical line, and they can do it. Line up 4 spools of thread, and they start getting the wrong number of spools. Line up 10 spools of thread, and it’s obvious that they’re just guessing and you’re wasting your time. Put five nuts in a can, then take two out and ask how many nuts are left: you get a response on the order of “some.”*

And this is not for lack of trying. The Piraha know other people have these things called “numbers.” They once asked Everett’s parents, the missionaries, to teach them numbers so they wouldn’t get cheated in trade deals. The missionaries tried for 8 months to teach them to count to ten and add small sums like 1 + 1. It didn’t work and the Piraha gave up.

Despite these difficulties, Everett insists that the Piraha are not dumb. After all, they survive in a very complex and demanding environment. He grew up with them; many of them are his personal friends and he regards them as mentally normal people with the exact same genetic abilities as everyone else who just lack the culturally-acquired skill of counting.

After all, on a standard IQ scale, someone who cannot even count to 4 would be severely if not profoundly retarded, institutionalized and cared for by others. The Piraha obviously live independently, hunt, raise, and gather their own food, navigate through the rainforest, raise their own children, build houses, etc. They aren’t building aqueducts, but they are surviving perfectly well outside of an institution.

Everett neglects the possibility that the Piraha are otherwise normal people who are innately bad at math.

Normally, yes, different mental abilities correlate because they depend highly on things like “how fast is your brain overall” or “were you neglected as a child?” But people also vary in their mental abilities. I have a friend who is above average in reading and writing abilities, but is almost completely unable to do math. This is despite being raised in a completely numerate culture, going to school, etc.

This is a really obvious and life-impairing problem in a society like ours, where you have to use math to function; my friend has been marked since childhood as “not cognitively normal.” It would be a completely invisible non-problem in a society like the Piraha, who use no math at all; in Piraha society, my friend would be “a totally normal guy” (or at least close.)

Everett states, explicitly, that not only are the Piraha only constrained by culture, but other people’s abilities are also directly determined by their cultures:

What is probably more remarkable about the relevant studies, though, is that they suggest that climbing any rungs of the arithmetic ladder requires numbers. How high we climb the ladder is not the result of our own inherent intelligence, but a result of the language we speak and of the culture we are born into. (page 136)

This is an absurd claim. Even my own children, raised in identically numerate environments and possessing, on the global scale, nearly identical genetics, vary in math abilities. You are probably not identical in abilities to your relatives, childhood classmates, next door neighbors, spouse, or office mates. We observe variations in mathematical abilities within cultures, families, cities, towns, schools, and virtually any group you choose that isn’t selected for math abilities. We can’t all do calculus just because we happen to live in a culture with calculus textbooks.

In fact, there is an extensive literature (which Everett ignores) on the genetics of intelligence:

Various studies have found the heritability of IQ to be between 0.7 and 0.8 in adults and 0.45 in childhood in the United States.[6][18][19] It may seem reasonable to expect that genetic influences on traits like IQ should become less important as one gains experiences with age. However, that the opposite occurs is well documented. Heritability measures in infancy are as low as 0.2, around 0.4 in middle childhood, and as high as 0.8 in adulthood.[7] One proposed explanation is that people with different genes tend to seek out different environments that reinforce the effects of those genes.[6] The brain undergoes morphological changes in development which suggests that age-related physical changes could also contribute to this effect.[20]

A 1994 article in Behavior Genetics based on a study of Swedish monozygotic and dizygotic twins found the heritability of the sample to be as high as 0.80 in general cognitive ability; however, it also varies by trait, with 0.60 for verbal tests, 0.50 for spatial and speed-of-processing tests, and 0.40 for memory tests. In contrast, studies of other populations estimate an average heritability of 0.50 for general cognitive ability.[18]

In 2006, The New York Times Magazine listed about three quarters as a figure held by the majority of studies.[21]

Thanks to Jayman

In plain speak, this means that intelligence in healthy adults is about 70-80% genetic and the rest seems to be random chance (like whether you were dropped on your head as a child or had enough iodine). So far, no one has proven that things like whole language vs. phonics instruction or two parents vs. one in the household have any effect on IQ, though they might affect how happy you are.
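As a rough illustration of where numbers like these come from (my own sketch, not something taken from the studies quoted above): twin studies usually estimate heritability by comparing how strongly identical (monozygotic) and fraternal (dizygotic) twins correlate on a trait, via the classic shortcut known as Falconer’s formula. The correlations below are hypothetical, chosen only to show the arithmetic:

\[
h^2 \approx 2\,(r_{MZ} - r_{DZ}), \qquad \text{e.g. } r_{MZ}=0.85,\; r_{DZ}=0.45 \;\Rightarrow\; h^2 \approx 2(0.85-0.45)=0.80
\]

Since identical twins share essentially all of their genes and fraternal twins about half, doubling the gap between the two correlations gives a crude estimate of how much of the variation in the trait is genetic.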

(Childhood IQ is much more amenable to environmental changes like “good teachers,” but these effects wear off as soon as children aren’t being forced to go to school every day.)

A full discussion of the scientific literature is beyond our current scope, but if you aren’t convinced about the heritability of IQ–including math abilities–I urge you to go explore the literature yourself–you might want to start with some of Jayman’s relevant FAQs on the subject.

Everett uses experiments done with the Piraha to support his claim that mathematical ability is culturally dependent, but this rests on his claim that the Piraha are cognitively identical to the rest of us in innate mathematical ability. Given that normal people are not cognitively identical in innate mathematical abilities, and mathematical abilities vary, on average, between groups (this is why people buy “Singapore Math” books and not “Congolese Math,”) there is no particular reason to assume Piraha and non-Piraha are cognitively identical. Further, there’s no reason to assume that any two groups are cognitively identical.

Mathematics only really got started when people invented agriculture, as they needed to keep track of things like “How many goats do I have?” or “Have the peasants paid their taxes?” A world in which mathematical ability is useful will select for mathematical ability; a world where it is useless cannot select for it.

Everett may still be correct that you wouldn’t be able to count if you hadn’t been taught how, but the Piraha can’t prove that one way or another. He would first have to show that Piraha who are raised in numerate cultures (say, by adoption,) are just as good at calculus as people from Singapore or Japan, but he cites no adoption studies nor anything else to this end. (And adoption studies don’t even show that for the groups we have studied, like whites, blacks, or Asians.)

Let me offer a cognitive contrast:

The Piraha are an anumeric, illiterate culture. They have encountered both letters and numbers, but not adopted them.

The Cherokee were once illiterate: they had no written language. Around 1809, an illiterate Cherokee man, Sequoyah, observed whites reading and writing letters. In a flash of insight, Sequoyah understood the concept of “use a symbol to encode a sound” even without being taught to read English. He developed his own alphabet (really a syllabary) for writing Cherokee sounds and began teaching it to others. Within 5 years of the syllabary’s completion, a majority of the Cherokee were literate; they soon had their own publishing industry producing Cherokee-language books and newspapers.

The Cherokee, though illiterate, possessed the innate ability to be literate, if only exposed to the cultural idea of letters. Once exposed, literacy spread rapidly–instantly, in human cultural evolution terms.

By contrast, the Piraha, despite their desire to adopt numbers, have not been able to do so.

(Yet. With enough effort, the Piraha probably can learn to count–after all, there are trained parrots who can count to 8. It would be strange if they permanently underperformed parrots. But it’s a difficult journey.)

That all said, I would like to make an anthropological defense of anumeracy: numeracy, as in ascribing exact values to specific items, is more productive in some contexts than others.

Do you keep track of the exact values of things you give your spouse, children, or close friends? If you invite a neighbor over for a meal, do you mark down what it cost to feed them and then expect them to feed you the same amount in return? Do you count the exact value of gifts and give the same value in return?

In Kabloona, de Poncins discusses the quasi-communist nature of the Eskimo economic system. For the Eskimo, hunter-gatherers living in the world’s harshest environment, the unit of exchange isn’t the item, but survival. A man whom you keep alive by giving him fish today is a man who can keep you alive by giving you fish tomorrow. Declaring that you will give a starving man only five fish, because he once gave you five fish, does you no good at all if he starves and can no longer share his own surplus with you later. The fish have, in this context, no innate, immutable value–they are as valuable as the life they preserve. To think otherwise would kill them.

It’s only when people have goods to trade, regularly, with strangers, that they begin thinking of objects as having defined values that hold steady over different transactions. A chicken is more valuable if I am starving than if I am not, but it has an identical value whether I am trading it for nuts or cows.

So it is not surprising that most agricultural societies have more complicated number systems than most hunter-gatherer societies. As Everett explains:

Led by Patience Epps of the University of Texas, a team of linguists recently documented the complexity of the number systems in many of the world’s languages. In particular, the researchers were concerned with the languages’ upper numerical limit–the highest quantity with a specific name. …

We are fond of coining new names for numbers in English, but the largest commonly used number name is googol (googolplex I define as an operation,) though there are bigger ones, like Graham’s number.

The linguistic team in question found the upper numerical limits in 193 languages of hunter-gatherer cultures in Australia, Amazonia, Africa, and North America. Additionally, they examined the upper limits of 204 languages spoken by agriculturalists and pastoralists in these regions. They discovered that the languages of hunter-gatherer groups generally have low upper limits. This is particularly true in Australia and Amazonia, the regions with so-called pure hunter-gatherer subsistence strategies.

In the case of the Australian languages, the study in question observed that more than 80 percent are limited numerically, with the highest quantity represented in such cases being only 3 or 4. Only one Australian language, Gamilaraay, was found to have an upper limit above 10, and its highest number is for 20. … The association [between hunter-gathering and limited numbers] is also robust in South America and Amazonia more specifically. The languages of hunter-gatherer cultures in this region generally have upper limits below ten. Only one surveyed language … Huaorani, has numbers for quantities greater than 20. Approximately two-thirds of the languages of such groups in the region have upper limits of five or less, while one-third have an upper limit of 10. Similarly, about two-thirds of African hunter-gatherer languages have upper limits of 10 or less.

There are a few exceptions–agricultural societies with very few numbers, and hunter-gatherers with relatively large numbers of numbers, but:

…there are no large agricultural states without elaborate number systems, now or in recorded history.

So how did the first people develop numbers? Of course we don’t know, but Everett suggests that at some point we began associating collections of things, like shells, with the cluster of fingers found on our hands. One finger, one shell; five fingers, five shells–easy correspondences. Once we mastered five, we skipped forward to 10 and 20 rather quickly.

Everett proposes that some numeracy was a necessary prerequisite for agriculture, as agricultural people would need to keep track of things like seasons and equinoxes in order to know when to plant and harvest. I question this on the grounds that I myself don’t look at the calendar and say, “Oh look, it’s the equinox, I’d better plant my garden!” but instead look outside and say, “Oh, it’s getting warm and the grass is growing again, I’d better get busy.” The harvest is even more obvious: I harvest when the plants are ripe.

Of course, I live in a society with calendars, so I can’t claim that I don’t look at the calendar. I look at the calendar almost every day to make sure I have the date correct. So perhaps I am using my calendrical knowledge to plan my planting schedule without even realizing it because I am just so used to looking at the calendar.

“What man among you, if he has 100 sheep and has lost 1 of them, does not leave the 99 in the open pasture and go after the one which is lost until he finds it? When he has found it, he lays it on his shoulders, rejoicing.” Luke 15:3-5

Rather than develop numbers and then start planting barley and millet, I propose that humans first domesticated animals, like pigs and goats. At first people were content to have “a few,” “some,” or “many” animals, but soon they were inspired to keep better track of their flocks.

By the time we started planting millet and wheat (a couple thousand years later,) we were probably already pretty good at counting sheep.

Our fondness for tracking astronomical cycles, I suspect, began for less utilitarian reasons: they were there. The cycles of the sun, moon, and other planets were obvious and easy to track, and we wanted to figure out what they meant. We put a ton of work into tracking equinoxes and eclipses and the epicycles of Jupiter and Mars (before we figured out heliocentrism.) People ascribed all sorts of import to these cycles (“Communicator Mercury is retrograde in outspoken Sagittarius from December 3-22, mixing up messages and disrupting pre-holiday plans.”) that turned out to be completely wrong. Unless you’re a fisherman or sailor, the moon’s phases don’t make any difference in your life; the other planets’ cycles turned out to be completely useless unless you’re trying to send a space probe to visit them. Eclipses are interesting, but don’t have any real effects. For all of the effort we’ve put into astronomy, the most important results have been good calendars to keep track of dates and allow us to plan multiple years into the future.

Speaking of dates, let’s continue this discussion in a week–on the next Anthropology Friday.

*Footnote: Even though I don’t think the Piraha prove as much as Everett thinks they do, that doesn’t mean Everett is completely wrong. Maybe already having number words is (in the vast majority of cases) a necessary precondition for learning to count.

One potentially illuminating case Everett didn’t explore is how young children in numerate cultures acquire numbers. Obviously they grow up in an environment with numbers, but below a certain age they can’t really use them. Can children at these ages duplicate lines of objects or patterns? Or do they master that behavior only after learning to count?

Back in October I commented on Schiller and Peterson’s claim in Count on Math (a book of math curriculum ideas for toddlers and preschoolers) that young children must learn mathematical “foundation” concepts in a particular order, ie:

Developmental sequence is fundamental to children’s ability to build conceptual understanding. … The chapters in this book present math in a developmental sequence that provides children a natural transition from one concept to the next, preventing gaps in their understanding. …

When children are allowed to explore many objects, they begin to recognize similarities and differences of objects.

When children can determine similarities and differences, they can classify objects.

When children can classify objects, they can see similarities and difference well enough to recognize patterns.

When children can recognize, copy, extend and create patterns, they can arrange sets in a one-to-one relationship.

When children can match objects one to one, they can compare sets to determine which have more and which have less.

When children can compare sets, they can begin to look at the “manyness” of one set and develop number concepts.

This developmental sequence provides a conceptual framework that serves as a springboard to developing higher level math skills.

The Count on Math curriculum doesn’t even introduce the numbers 1-5 until week 39 for 4 year olds (3 year olds are never introduced to numbers) and numbers 6-10 aren’t introduced until week 37 for the 5 year olds!

Note that Schiller and Everett are arguing diametrical opposites–Everett says the ability to count to three and distinguish the “manyness” of sets is instinctual, present even in infants, but that the ability to copy patterns and match items one-to-one only comes after long acquaintance and practice with counting, specifically number words.

Schiller claims that children only develop the ability to distinguish manyness and count to three after learning to copy patterns and match items one-to-one.

As I said back in October, I think Count on Math’s claim is pure bollocks. If you miss the “comparing sets” day at preschool, you aren’t going to end up unable to multiply. The Piraha may not prove as much as Everett wants them to, but the neuroscience and animal studies he cites aren’t worthless. In general, I distrust anyone who claims that you must introduce this long a set of concepts in this strict an order just to develop a basic competency that the vast majority of people seem to acquire without difficulty.

Of course, Lynne Peterson is a real teacher with a real teacher’s certificate and a BA in … it doesn’t say, and Pam Schiller was Vice President of Professional Development for the Early Childhood Division at McGraw Hill publishers and president of the Southern Early Childhood Association. She has a PhD in… it doesn’t say. Here’s some more on Dr. Schiller’s many awards. So maybe they know better than Everett, who’s just an anthropologist. But Everett has some actual evidence on his side.

But I’m a parent who has watched several children learn to count… and Schiller and Peterson are wrong.

Local Optima, Diversity, and Patchwork

Local optima–or optimums, if you prefer–are an illusion created by distance. A man standing on the hilltop at (approximately) X=2 may see land sloping downward all around himself and think that he is at the highest point on the graph.

But hand him a telescope, and he discovers that the fellow standing on the hilltop at X=4 is even higher than he is. And hand the fellow at X=4 a telescope, and he’ll discover that X=6 is even higher.

A global optimum is the best possible way of doing something; a local optimum can look like a global optimum because all of the other, similar ways of doing the same thing are worse. To get from a local optimum to a global optimum, you might have to endure a significant trough of things going worse before you reach your destination. (Those troughs would be the points X=3.03 and X=5.02 on the graph.) If the troughs are short and shallow enough, people can accidentally power their way through. If long and deep enough, people get stuck.
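To make the local-vs-global picture concrete, here is a minimal sketch (my own, not from this post or any source it cites) of greedy hill climbing getting stuck on a lower peak. The function f below is hypothetical, chosen only so that it has a small hill near x = 2 and a taller one near x = 4, loosely echoing the graph described above:

import math

def f(x):
    # A made-up landscape: a small bump centered near x = 2, a bigger bump near x = 4.
    return math.exp(-(x - 2) ** 2) + 2 * math.exp(-(x - 4) ** 2)

def hill_climb(x, step=0.05, max_steps=10_000):
    # Greedily move uphill; stop when neither neighbor is higher (a local optimum).
    for _ in range(max_steps):
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
        else:
            return x
    return x

print(hill_climb(1.5))  # climbs the nearby hill and stops around x = 2.1 (a local optimum)
print(hill_climb(3.2))  # starting past the trough, it reaches the higher peak near x = 4

The climber who starts at x = 1.5 never sees the taller hill, because every small step toward it first makes things worse; that trough is exactly what the man with the telescope has to cross.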

The introduction of new technology, exposure to another culture’s solutions, or even random chance can expose a local optimum and propel a group to cross that trough.

For example, back in 1400, Europeans were perfectly happy to get their Chinese silks, spices, and porcelains via the overland Silk Road. But with the fall of Constantinople to the Turks in 1453, the Silk Road became more fragmented and difficult (ie dangerous, ie expensive) to travel. The increased cost of the normal road prompted Europeans to start exploring other, less immediately profitable trade routes–like the possibility of sailing clear around the world, via the ocean, to the other side of China.

Without the eastern trade routes first diminishing in profitability, it wouldn’t have been economically viable to explore and develop the western routes. (With the discovery of the Americas, in the process, a happy accident.)

West Hunter (Greg Cochran) writes frequently about local optima; here’s an excerpt on plant domestication:

The reason that a few crops account for the great preponderance of modern agriculture is that a bird in the hand – an already-domesticated, already-optimized crop – feeds your family/makes money right now, while a potentially useful yet undomesticated crop doesn’t. One successful domestication tends to inhibit others that could flourish in the same niche. Several crops were domesticated in the eastern United States, but with the advent of maize and beans (from Mesoamerica) most were abandoned. Maybe if those Amerindians had continued to selectively breed sumpweed for a few thousand years, it could have been a contender: but nobody is quite that stubborn.

Teosinte was an unpromising weed: it’s hard to see why anyone bothered to try to domesticate it, and it took a long time to turn it into something like modern maize. If someone had brought wheat to Mexico six thousand years ago, likely the locals would have dropped maize like a hot potato. But maize ultimately had advantages: it’s a C4 plant, while wheat is C3: maize yields can be much higher.

Teosinte is the ancestor of modern corn. Cochran’s point is that in the domestication game, wheat is a local optimum; given the wild ancestors of wheat and corn, you’d develop a better, more nutritious variety of wheat first and probably just abandon the corn. But if you didn’t have wheat and you just had corn, you’d keep at the corn–and in the end, get an even better plant.

(Of course, corn is a success story; plenty of people domesticated plants that actually weren’t very good just because that’s what they happened to have.)

Japan in 1850 was a culturally rich, pre-industrial, feudal society with a strong isolationist stance. In 1853, the Japanese discovered that the rest of the world’s industrial, military technology was now sufficiently advanced to pose a serious threat to Japanese sovereignty. Things immediately degenerated, culminating in the Boshin War (civil war, 1868-9,) but with the Meiji Restoration Japan embarked on an industrialization crash-course. By 1895, Japan had kicked China’s butt in the First Sino-Japanese War, and the Japanese population–after holding steady for centuries–doubled between 1873 and 1935, from 35 to 70 million people. By the 1930s, Japan was one of the world’s most formidable industrial powers, and today it remains an economic and technological powerhouse.

Clearly the Japanese people, in 1850, contained the untapped ability to build a much more complex and advanced society than the one they had, and it did not take much exposure to the outside world to precipitate a total economic and technological revolution.

Sequoyah’s syllabary, showing script and print forms

A similar case occurred in 1821 when Sequoyah, a Cherokee man, invented his own syllabary (syllable-based alphabet) after observing American soldiers reading letters. The Cherokee quickly adopted Sequoyah’s writing system–by 1825, the majority of Cherokee were literate and the Cherokee had their own printing industry. Interestingly, although some of the Cherokee letters look like Latin, Greek, or Cyrillic letters, there is no correspondence in sound, because Sequoyah could not read English. He developed his entire syllabary after simply being exposed to the idea of writing.

The idea of literacy has occurred independently only a few times in human history; the vast majority of people picked up alphabets from someone else. Our alphabet comes from the Latins, who got it from the Greeks, who adopted it from the Phoenicians, who got it from some proto-Canaanite script writers, and even then literacy spread pretty slowly. The Cherokee, while not as technologically advanced as Europeans at the time, were already a nice agricultural society and clearly possessed the ability to become literate as soon as they were exposed to the idea.

When I walk around our cities, I often think about what their ruins will look like to explorers in a thousand years.
“We also pass a ruin of what once must have been a grand building. The walls are marked with logos from a Belgian University. This must have once been some scientific study centre of sorts.”

By contrast, there are many cases of people being exposed to or given a new technology but completely lacking the ability to functionally adopt, improve, or maintain it. The Democratic Republic of the Congo, for example, is full of ruined colonial-era buildings and roads built by outsiders that the locals haven’t maintained. Without the Belgians, the infrastructure has crumbled.

Likewise, contact between Europeans and groups like the Australian Aborigines did not result in the Aborigines adopting European technology nor a new and improved fusion of Aborigine and European tech, but in total disaster for the Aborigines. While the Japanese consistently top the charts in educational attainment, Aborigine communities are still struggling with low literacy rates, high dropout rates, and low employment–the modern industrial economy, in short, has not been kind to them.

Along a completely different evolutionary pathway, cephalopods–squids, octopuses, and their tentacled ilk–are the world’s smartest invertebrates. This is pretty amazing, given that their nearest cousins are snails and clams. Yet cephalopod intelligence only goes so far. No one knows (yet) just how smart cephalopods are–squids in particular are difficult to work with in captivity because they are active hunter/swimmers and need a lot more space than the average aquarium can devote–but their brain power appears to be on the order of a dog’s.

After millions of years of evolution, cephalopods may represent the best nature can do–with an invertebrate. Throw in a backbone, and an animal can get a whole lot smarter.

And in chemistry, activation energy is the amount of energy you have to put into a chemical system before a reaction can begin. Stable chemical systems essentially exist at local optima, and it can require the input of quite a lot of energy before you get any action out of them. For atoms, iron is the global–should we say universal?–optimum, beyond which fusion is endothermic rather than exothermic. In other words, nuclear fusion in stellar cores ends with iron; elements heavier than iron are mostly produced in cataclysms like supernova explosions.

So what do local optima have to do with diversity?

The current vogue for diversity (“Diversity is our greatest strength”) suggests that we can reach global optima faster by simply smushing everyone together and letting them compare notes. Scroll back to the Japanese case. Edo Japan had a nice culture, but it was also beset by frequent famines. Meiji Japan doubled its population. Giving everyone, right now, the same technology and culture would bring everyone up to the same level.

But you can’t tell from within if you are at a local or global optimum. That’s how they work. The Indians likely would have never developed corn had they been exposed to wheat early on, and subsequently Europeans would have never gotten to adopt corn, either. Good ideas can take a long time to refine and develop. Cultures can improve rapidly–even dramatically–by adopting each other’s good ideas, but they also need their own space and time to pursue their own paths, so that good but slowly developing ideas aren’t lost.

Which gets us back to Patchwork.

Book on a Friday: Squid Empire: The Rise and Fall of the Cephalopods by Danna Staaf

Danna Staaf’s Squid Empire: The Rise and Fall of the Cephalopods is about the evolution of squids and their relatives–nautiluses, cuttlefish, octopuses, ammonoids, etc. If you are really into squids or would like to learn more about squids, this is the book for you. If you aren’t big on reading about squids but want something that looks nice on your coffee table and matches your Cthulhu, Flying Spaghetti Monster, and 20,000 Leagues Under the Sea decor, this is the book for you. If you aren’t really into squids, you probably won’t enjoy this book.

Squids, octopuses, etc. are members of the class of cephalopods, just as you are a member of the class of mammals. Mammals are in the phylum of chordates; cephalopods are mollusks. It’s a surprising lineage for one of Earth’s smartest creatures–80% of mollusk species are slugs and snails. If you think you’re surrounded by idiots, imagine how squids must feel.

The short story of cephalopodic evolution is that millions upon millions of years ago, most life was still stuck at the bottom of the ocean. There were some giant microbial mats, some slugs, some snails, some worms, and not a whole lot else. One of those snails figured out how to float by removing some of the salt from the water inside its shell, making itself a bit buoyant. Soon after, its foot (all mollusks have a “foot”) split into multiple parts. The now-floating snail drifted over the seafloor, using its new tentacles to catch and eat the less-mobile creatures below it.

From here, cephalopods diversified dramatically, creating the famous ammonoids of fossil-dating lore.

Cross-section of a fossilized ammonite shell, revealing internal chambers and septa

Ammonoids are known primarily from their shells (which fossilize well) rather than their fleshy tentacle parts, (which fossilize badly). But shells we have in such abundance they can be easily used for dating other nearby fossils.

Ammonoids are obviously similar to their cousins, the lovely chambered nautiluses. (Please don’t buy nautilus shells; taking them out of their shells kills them and no one farms nautiluses so the shell trade is having a real impact on their numbers. We don’t need their shells, but they do.)

Ammonoids succeeded for millions of years, until the Cretaceous extinction event that also took out the dinosaurs. The nautiluses survived–the author speculates that because nautiluses lay large, yolk-rich eggs that develop very slowly, infant nautiluses were able to wait out the event, while ammonoids, whose tiny, fast-growing hatchlings depended on feeding immediately, simply starved in the upheaval.

In the aftermath, modern squids and octopuses proliferated.

How did we get from floating, shelled snails to today’s squishy squids?

The first step was internalization–cephalopods began growing their fleshy mantles over their shells instead of inside of them–in essence, these invertebrates became vertebrates. Perhaps this was some horrible genetic accident, but it worked out. These internalized shells gradually became smaller and thinner, until they were reduced to a flexible rod called a “pen” that runs the length of a squid’s mantle. (Cuttlefish still retain a more substantial bone, which is frequently collected on beaches and sold for birds to peck at for its calcium.)

With the loss of the buoyant shell, squids had to find another way to float. This they apparently achieved by filling themselves with ammonia salts, which makes them less dense than water but also makes their decomposition disgusting and renders them unfossilizable because they turn to mush too quickly. Octopuses, by contrast, aren’t full of ammonia and so can fossilize.

Since the book is devoted primarily to cephalopod evolution rather than modern cephalopods, it doesn’t go into much depth on the subject of their intelligence. Out of all the invertebrates, cephalopods are easily the most intelligent (perhaps really the only intelligent invertebrates). Why? If cephalopods didn’t exist, we might easily conclude that invertebrates can’t be intelligent–invertebrateness is somehow inimical to intelligence. After all, most invertebrates are about as intelligent as slugs. But cephalopods do exist, and they’re pretty smart.

The obvious answer is that cephalopods can move and are predatory, which requires bigger brains. But why are they the only invertebrates–apparently–who’ve accomplished the task?

But enough jabber–let’s let Mrs. Staaf speak:

I find myself obliged to address the perennial question: “octopuses” or “octopi”? Or, heaven help us, “octopodes”?

Whichever you like best. Seriously. Despite what you may have heard, “octopus” is neither ancient Greek nor Latin. Aristotle called the animal polypous for its “many feet.” The ancient Romans borrowed this word and latinized the spelling to polypus. It was much later that a Renaissance scientist coined and popularized the word “octopus,” using Greek roots for “eight” and “foot” but Latin spelling.

If the word had actually been Greek, it would be spelled octopous and pluralized octopodes. If translated into Latin, it might have become octopes and pluralized octopedes, but more likely the ancient Romans would have simply borrowed the Greek word–as they did with polypous. Those who perhaps wished to appear erudite used the Greek plural polypodes, while others favored a Latin ending and pluralized it polypi.

The latter is a tactic we English speakers emulate when we welcome “octopus” into our own language and pluralize it “octopuses” as I’ve chosen to do.

There. That settles it.

Dinosaurs are the poster children for evolution and extinction writ large…

Of course, not all of them did die. We know now that birds are simply modern dinosaurs, but out of habit we tend to reserve the word “dinosaur” for the huge ancient creatures that went extinct at the end of the Cretaceous. After all, even if they had feathers, they seem so different from today’s finches and robins. For one thing, the first flying feathered dinosaurs all seem to have had four wings. There aren’t any modern birds with four wings.

Well… actually, domestic pigeons can be bred to grow feathers on their legs. Not fuzzy down, but long flight feathers, and along with these feathers their leg bones grow more winglike. The legs are still legs; they can’t be used to fly like wings. They do, however, suggest a clear step along the road from four-winged dinosaurs to two-winged birds. The difference between pigeons with ordinary legs and pigeons with wing-legs is created by control switches in their DNA that alter the expression of two particular genes. These genes are found in all birds, indeed in all vertebrates, and so were most likely present in dinosaurs as well.

…and I’ve just discovered that almost all of my other bookmarks fell out of the book. Um.

So squid brains are shaped like donuts because their eating/jet propulsion tube runs through the middle of their bodies and thus through the middle of their brains. It seems like this could be a problem if the squid eats too much or eats something with sharp bits in it, but squids seem to manage.

Squids can also leap out of the water and fly through the air for some ways. Octopuses can carry water around in their mantles, allowing them to move on dry land for a few minutes without suffocating.

Since cephalopods are somewhat unique among mollusks for their ability to move quickly, they have a lot in common, genetically, with vertebrates. In essence, they are the most vertebrate-behaving of the mollusks. Convergent evolution.

The vampire squid, despite its name, is actually more of an octopus.

Let me quote from the chapter on sex and babies:

This is one arena in which cephalopods, both ancient and modern, are actually less alien than many aliens–even other mollusks. Slugs, for instance, are hermaphroditic, and in the course of impregnating each other their penises sometimes get tangled, so they chew them off. Nothing in the rest of this chapter will make you nearly that uncomfortable. …

The lovely argonaut octopus

In one living coleoid species, however, sex is blindingly obvious. Females of the octopus known as an argonaut are five times larger than males. (A killer whale is about five times larger than an average adult human, which in turn is about five times larger than an opossum.)

This enormous size differential caught the attention of paleontologists who had noticed that many ammonoid species also came in two distinct sizes, which they had dubbed microconch (little shell) and macroconch (big shell). Both were clearly mature, as they had completed the juvenile part of the shell and constructed the final adult living chamber. After an initial flurry of debate, most researchers agreed to model ammonoid sex on modern argonauts, and began to call macroconchs females and microconchs males.

Some fossil nautiloids also come in macroconch and microconch flavors, though it’s more difficult to be certain that both are adults…

However, the shells of modern nautiluses show the opposite pattern–males are somewhat larger than females… Like the nautiloid shift from ten arms to many tens of arms, the pattern could certainly have evolved from a different ancestral condition. If we’re going to make that argument, though, we have to wonder when nautiloids switched from females to males as the larger sex, and why.

In modern species that have larger females, we usually assume the size difference has to do with making or brooding a lot of eggs. Female argonauts take it up a notch and actually secrete a shell-like brood chamber from their arms, using it to cradle numerous batches of eggs over their lifetimes. Meanwhile, each tiny male argonaut gets to mate only once. His hectocotylus is disposable–after being loaded with sperm and inserted into the female, it breaks off. …

By contrast, when males are the bigger sex, we often guess that the purpose is competition. Certainly many species of squid and cuttlefish have large males that battle for female attention on the mating grounds. They display outrageous skin patterns as they push, shove, and bite each other. Females do appear impressed; at least, they mate with the winning males and consent to be guarded by them. Even in these species, though, there are some small males who exhibit a totally different mating strategy. While the big males strut their stuff, these small males quietly sidle up to the females, sometimes disguising themselves with female color patterns. This doesn’t put off the real females, who readily mate with these aptly named “sneaker males.” By their very nature, such obfuscating tactics are virtually impossible to glean from the fossil record…

More on octopus mating habits.

This, of course, reminded me of this graph:

In the majority of countries, women are more likely to be overweight than men (suggesting that our measure of “overweight” is probably flawed.) In some countries women are much more likely to be overweight, while in some countries men and women are almost equally likely to be overweight, and in just a few–the Czech Republic, Germany, Hungary, Japan, and (barely) France–men are more likely to be overweight.

Is there any rhyme or reason to this pattern? Surely affluence is related, but Japan, for all of its affluence, has very few overweight people at all, while Egypt, which is pretty poor, has far more overweight people. (A greater % of Egyptian women are overweight than American women, but American men are more likely to be overweight than Egyptian men.)

Of course, male humans are still–in every country–larger than females. Even an overweight female doesn’t necessarily weigh more than a regular male. But could the variation in male and female obesity rates have anything to do with historic mating strategies? Or is it completely irrelevant?

Back to the book:

Coleoid eyes are as complex as our own, with a lens for focusing light, a retina to detect it, and an iris to sharpen the image. … Despite their common complexity, though, there are some striking differences [between our and squid eyes]. For example, our retina has a blind spot where a bundle of nerves enters the eyeball before spreading out to connect to the front of every light receptor. By contrast, light receptors in the coleoid retina are innervated from behind, so there’s no “hole” or blind spot. Structural differences like this show that the two groups converged on similar solutions through distinct evolutionary pathways.

Another significant difference is that fish went on to evolve color vision by increasing the variety of light-sensitive proteins in their eyes; coleoids never did and are probably color blind. I say “probably” because the idea of color blindness in such colorful animals has flummoxed generations of scientists…

“I’m really more of a cuddlefish”

Color-blind or not, coleoids can definitely see something we humans are blind to: the polarization of light.

Sunlight normally consists of waves vibrating in all directions. But when these waves are reflected off certain surfaces, like water, they get organized and arrive at the retina vibrating in only one direction. We call this “glare” and we don’t like it, so we invented polarized sunglasses. … That’s pretty much all polarized sunglasses can do–block polarized light. Sadly, they can’t help you decode the secret messages of cuttlefish, which have the ability to perform a sort of double-talk with their skin, making color camouflage for the benefit of polarization-blind predators while flashing polarized displays to their fellow cuttlefish.

That’s amazing. Here’s an article with more on cuttlefish vision and polarization.

Overall, I enjoyed this book. The writing isn’t the most thrilling, but the author has a sense of humor and a deep love for her subject. I recommend it to anyone with a serious hankering to know more about the evolution of squids, or who’d like to learn more about an ancient animal besides dinosaurs.

Are “Nerds” Just a Hollywood Stereotype?

Yes, MIT has a football team.

The other day on Twitter, Nick B. Steves challenged me to find data supporting or refuting his assertion that Nerds vs. Jocks is a false stereotype, invented around 1975. Of course, we HBDers have a saying–“all stereotypes are true,” even the ones about us–but let’s investigate Nick’s claim and see where it leads us.

(NOTE: If you have relevant data, I’d love to see it.)

Unfortunately, terms like “nerd,” “jock,” and “chad” are not all that well defined. Certainly if we define “jock” as “athletic but not smart” and nerd as “smart but not athletic,” then these are clearly separate categories. But what if there’s a much bigger group of people who are smart and athletic?

Or what if we are defining “nerd” and “jock” too narrowly? Wikipedia defines nerd as, “a person seen as overly intellectual, obsessive, or lacking social skills.” I recall a study–which I cannot find right now–which found that nerds had, overall, lower-than-average IQs, but that study included people who were obsessive about things like comic books, not just people who majored in STEM. Similarly, should we define “jock” only as people who are good at sports, or do passionate sports fans count?

For the sake of this post, I will define “nerd” as “people with high math/science abilities” and “jock” as “people with high athletic abilities,” leaving the matter of social skills undefined. (People who merely like video games or watch sports, therefore, do not count.)

Nick is correct on one count: according to Wikipedia, although the word “nerd” has been around since 1951, it was popularized during the 70s by the sitcom Happy Days. However, Wikipedia also notes that:

An alternate spelling,[10] as nurd or gnurd, also began to appear in the mid-1960s or early 1970s.[11] Author Philip K. Dick claimed to have coined the nurd spelling in 1973, but its first recorded use appeared in a 1965 student publication at Rensselaer Polytechnic Institute.[12][13] Oral tradition there holds that the word is derived from knurd (drunk spelled backward), which was used to describe people who studied rather than partied. The term gnurd (spelled with the “g”) was in use at the Massachusetts Institute of Technology by 1965.[14] The term nurd was also in use at the Massachusetts Institute of Technology as early as 1971 but was used in the context for the proper name of a fictional character in a satirical “news” article.[15]

suggesting that the word was already common among nerds themselves before it was picked up by TV.

But we can trace the nerd-jock dichotomy back before the terms were coined: back in 1921, Lewis Terman, a researcher at Stanford University, began a long-term study of exceptionally high-IQ children, the Genetic Studies of Genius aka the Terman Study of the Gifted:

Terman’s goal was to disprove the then-current belief that gifted children were sickly, socially inept, and not well-rounded.

This belief was especially popular in a little nation known as Germany, where it inspired both long hikes in the woods to keep schoolchildren fit and the mass extermination of Jews, who were believed to be muddying the German gene pool with their weak, sickly, high-IQ genes (and nefariously trying to marry strong, healthy Germans in order to replenish their own defective stock.) It didn’t help that German Jews were both high-IQ and beset by a number of illnesses (probably related to high rates of consanguinity,) but then again, the Gypsies are beset by even more debilitating illnesses, and no one blames those on all of the fresh air and exercise afforded by their highly mobile lifestyles.

(Just to be thorough, though, the Nazis also exterminated the Gypsies and Hans Asperger’s subjects, despite Asperger’s insistence that they were very clever children who could probably be of great use to the German war effort via code breaking and the like.)

The results of Terman’s study are strongly in Nick’s favor. According to Psychology Today’s account:

His final group of “Termites” averaged a whopping IQ of 151. Following-up his group 35-years later, his gifted group at mid-life definitely seemed to conform to his expectations. They were taller, healthier, physically better developed, and socially adept (dispelling the myth at the time of high-IQ awkward nerds).

According to Wikipedia:

…the first volume of the study reported data on the children’s family,[17] educational progress,[18] special abilities,[19] interests,[20] play,[21] and personality.[22] He also examined the children’s racial and ethnic heritage.[23] Terman was a proponent of eugenics, although not as radical as many of his contemporary social Darwinists, and believed that intelligence testing could be used as a positive tool to shape society.[3]

Based on data collected in 1921–22, Terman concluded that gifted children suffered no more health problems than normal for their age, save a little more myopia than average. He also found that the children were usually social, were well-adjusted, did better in school, and were even taller than average.[24] A follow-up performed in 1923–1924 found that the children had maintained their high IQs and were still above average overall as a group.

Of course, we can go back even further than Terman–in the early 1800s, allergies like hay fever were associated with the nobility, who of course did not do much vigorous work in the fields.

My impression, based on studies I’ve seen previously, is that athleticism and IQ are positively correlated. That is, smarter people tend to be more athletic, and more athletic people tend to be smarter. There’s a very obvious reason for this: our brains are part of our bodies, people with healthier bodies therefore also have healthier brains, and healthier brains tend to work better.

At the very bottom of the IQ distribution, mentally retarded people tend to also be clumsy, flaccid, or lacking good muscle tone. The same genes (or environmental conditions) that make children have terrible health/developmental problems often also affect their brain growth, and conditions that affect their brains also affect their bodies. As we progress from low to average to above-average IQ, we encounter increasingly healthy people.

In most smart people, high-IQ doesn’t seem to be a random fluke, a genetic error, nor fitness reducing: in a genetic study of children with exceptionally high IQs, researchers failed to find many genes that specifically endowed the children with genius, but found instead a fortuitous absence of deleterious genes that knock a few points off the rest of us. The same genes that have a negative effect on the nerves and proteins in your brain probably also have a deleterious effect on the nerves and proteins throughout the rest of your body.

And indeed, there are many studies which show a correlation between intelligence and strength (eg, Longitudinal and Cross-Sectional Assessments of Age Changes in Physical Strength as Related to Sex, Social Class, and Mental Ability) or intelligence and overall health/not dying (eg, Intelligence in young adulthood and cause-specific mortality in the Danish Conscription Database (pdf) and The effects of occupation-based social position on mortality in a large American cohort.)

On the other hand, the evolutionary standard for “fitness” isn’t strength or longevity, but reproduction, and on this scale the high-IQ don’t seem to do as well:

Smart teens don’t have sex (or kiss much either): (h/t Gene Expression)

Controlling for age, physical maturity, and mother’s education, a significant curvilinear relationship between intelligence and coital status was demonstrated; adolescents at the upper and lower ends of the intelligence distribution were less likely to have sex. Higher intelligence was also associated with postponement of the initiation of the full range of partnered sexual activities. … Higher intelligence operates as a protective factor against early sexual activity during adolescence, and lower intelligence, to a point, is a risk factor.

Source

Here we see the issue plainly: males at 120 and 130 IQ are less likely to get laid than clinically retarded men with IQs in the 60s and 70s. The right side of the graph is the “nerds”; the left side, the “jocks.” Of course, the high-IQ females are even less likely to get laid than the high-IQ males, but males tend to judge themselves against other men, not women, when it comes to dating success. Since the low-IQ females are much less likely to get laid than the low-IQ males, this implies that most of these “popular” guys are dating girls who are smarter than themselves–a fact not lost on the nerds, who would also like to date those girls.

In 2001, the MIT/Wellesley magazine Counterpoint (Wellesley is MIT’s “sister school” and the two campuses allow cross-enrollment in each other’s courses) published a sex survey that provides a more detailed picture of nerd virginity:

I’m guessing that computer scientists invented polyamory, and neuroscientists are the chads of STEM. The results are otherwise pretty predictable.

Unfortunately, Counterpoint appears to be defunct due to lack of funding/interest and I can no longer find the original survey, but here is Jason Malloy’s summary from Gene Expression:

By the age of 19, 80% of US males and 75% of women have lost their virginity, and 87% of college students have had sex. But this number appears to be much lower at elite (i.e. more intelligent) colleges. According to the article, only 56% of Princeton undergraduates have had intercourse. At Harvard 59% of the undergraduates are non-virgins, and at MIT, only a slight majority, 51%, have had intercourse. Further, only 65% of MIT graduate students have had sex.

The student surveys at MIT and Wellesley also compared virginity by academic major. The chart for Wellesley displayed below shows that 0% of studio art majors were virgins, but 72% of biology majors were virgins, and 83% of biochem and math majors were virgins! Similarly, at MIT 20% of ‘humanities’ majors were virgins, but 73% of biology majors. (Apparently those most likely to read Darwin are also the least Darwinian!)

College Confidential has one paragraph from the study:

How Rolling Stone-ish are the few lucky souls who are doing the horizontal mambo? Well, not very. Considering all the non-virgins on campus, 41% of Wellesley and 32% of MIT students have only had one partner (figure 5). It seems that many Wellesley and MIT students are comfortingly monogamous. Only 9% of those who have gotten it on at MIT have been with more than 10 people and the number is 7% at Wellesley.

Someone needs to find the original study and PUT IT BACK ON THE INTERNET.

But this lack of early sexual success seems to translate into long-term marital happiness, once nerds find “the one.” Lex Fridman’s Divorce Rates by Profession offers a thorough list. The average divorce rate was 16.35%, with a high of 43% (Dancers) and a low of 0% (“Media and communication equipment workers.”)

I’m not sure exactly what all of these jobs are nor exactly which ones should count as STEM (veterinarian? anthropologists?) nor do I know how many people are employed in each field, but I count 49 STEM professions that have lower than average divorce rates (including computer scientists, economists, mathematical science, statisticians, engineers, biologists, chemists, aerospace engineers, astronomers and physicists, physicians, and nuclear engineers,) and only 23 with higher than average divorce rates (including electricians, water treatment plant operators, radio and telecommunication installers, broadcast engineers, and similar professions.) The purer sciences obviously had lower rates than the more practical applied tech fields.

The big outliers were mathematicians (19.15%), psychologists (19.26%), and sociologists (23.53%), though I’m not sure they count (if so, there were only 22 professions with higher than average divorce rates.)

I’m not sure which professions count as “jock” or “chad,” but athletes had lower than average rates of divorce (14.05%) as did firefighters, soldiers, and farmers. Financial examiners, hunters, and dancers, (presumably an athletic female occupation) however, had very high rates of divorce.

Medical Daily has an article on Who is Most Likely to Cheat? The Top 9 Jobs Unfaithful People Have (according to survey):

According to the survey recently taken by the “infidelity dating website,” Victoria Milan, individuals working in the finance field, such as brokers, bankers, and analysts, are more likely to cheat than those in any other profession. However, following those in finance comes those in the aviation field, healthcare, business, and sports.

With the exception of healthcare and maybe aviation, these are pretty typical Chad occupations, not STEM.

The Mirror has a similar list of jobs where people are most and least likely to be married. Most likely: Dentist, Chief Executive, Sales Engineer, Physician, Podiatrist, Optometrist, Farm product buyer, Precision grinder, Religious worker, Tool and die maker.

Least likely: Paper-hanger, Drilling machine operator, Knitter textile operator, Forge operator, Mail handler, Science technician, Practical nurse, Social welfare clerk, Winding machine operative, Postal clerk.

I struggled to find data on male fertility by profession/education/IQ, but there’s plenty on female fertility, eg the deceptively titled High-Fliers have more Babies:

…American women without any form of high-school diploma have a fertility rate of 2.24 children. Among women with a high-school diploma the fertility rate falls to 2.09 and for women with some form of college education it drops to 1.78.

However, among women with college degrees, the economists found the fertility rate rises to 1.88 and among women with advanced degrees to 1.96. In 1980 women who had studied for 16 years or more had a fertility rate of just 1.2.

As the economists prosaically explain: “The relationship between fertility and women’s education in the US has recently become U-shaped.”

Here is another article about the difference in fertility rates between high and low-IQ women.

But female fertility and male fertility may not be the same–I recall data elsewhere indicating that high-IQ men have more children than low IQ men, which implies those men are having their children with low-IQ women. (For example, while Bill and Hillary seem about matched on IQ, and have only one child, Melania Trump does not seem as intelligent as Trump, who has five children.)

Amusingly, I did find data on fertility rate by father’s profession for 1920, in the Birth Statistics for the Birth Registration Area of the US:

Of the 1,508,874 children born in 1920 in the birth registration area of the United States, occupations of fathers are stated for … 96.9%… The average number of children ever born to the present wives of these occupied fathers is 3.3 and the average number of children living 2.9.

The average number of children ever born ranges from 4.6 for foremen, overseers, and inspectors engaged in the extraction of minerals to 1.8 for soldiers, sailors, and marines. Both of these extreme averages are easily explained, for soldiers, sailors, and marines are usually young, while such foremen, overseers, and inspectors are usually in middle life. For many occupations, however, the ages of the fathers are presumably about the same and differences shown indicate real differences in the size of families. For example, the low figures for dentists (2), architects (2.1), and artists, sculptors, and teachers of art (2.2) are in striking contrast with the figures for mine operatives (4.3), quarry operatives (4.1), bootblacks, and brick and stone masons (each 3.9). …

As a rule the occupations credited with the highest number of children born are also credited with the highest number of children living, the highest number of children living appearing for foremen, overseers, and inspectors engaged in the extraction of minerals (3.9) and for steam and street railroad foremen and overseers (3.8), while if we exclude groups plainly affected by the age of fathers, the highest number of children living appears for mine and quarry operatives (each 3.6).

Obviously the job market was very different in 1920–no one was majoring in computer science. Perhaps some of those folks who became mine and quarry operatives back then would become engineers today–or perhaps not. Here are the average numbers of surviving children for the most obviously STEM professions (remember average for 1920 was 2.9):

Electricians 2.1, Electrotypers 2.2, telegraph operator 2.2, actors 1.9, chemists 1.8, Inventors 1.8, photographers and physicians 2.1, technical engineers 1.9, veterinarians 2.2.

I don’t know what paper hangers do, but the Mirror said they were among the least likely to be married, and in 1920, they had an average of 3.1 children–above average.

What about athletes? How smart are they?

“Athletes Show Huge Gaps on SAT Scores” is not a promising title for the “nerds are athletic” crew.

The Journal-Constitution studied 54 public universities, “including the members of the six major Bowl Championship Series conferences and other schools whose teams finished the 2007-08 season ranked among the football or men’s basketball top 25.”…

  • Football players averaged 220 points lower on the SAT than their classmates; men’s basketball players averaged 227 points lower.
  • University of Florida won the prize for biggest gap between football players and the student body, with players scoring 346 points lower than their peers.
  • Georgia Tech had the nation’s best average SAT score for football players, 1028 of a possible 1600, and best average high school GPA, 3.39 of a possible 4.0. But because its student body is apparently very smart, Tech’s football players still scored 315 SAT points lower than their classmates.
  • UCLA, which has won more NCAA championships in all sports than any other school, had the biggest gap between the average SAT scores of athletes in all sports and its overall student body, at 247 points.

From the original article, which no longer seems to be up on the Journal-Constitution website:

All 53 schools for which football SAT scores were available had at least an 88-point gap between team members’ average score and the average for the student body. …

Football players performed 115 points worse on the SAT than male athletes in other sports.

The differences between athletes’ and non-athletes’ SAT scores were less than half as big for women (73 points) as for men (170).

Many schools routinely used a special admissions process to admit athletes who did not meet the normal entrance requirements. … At Georgia, for instance, 73.5 percent of athletes were special admits compared with 6.6 percent of the student body as a whole.

On the other hand, as Discover Magazine discusses in “The Brain: Why Athletes are Geniuses,” athletic tasks–like catching a fly ball or slapping a hockey puck–require exceptionally fast and accurate brain signals to trigger the correct muscle movements.

Ryan Stegal studied the GPAs of high school student athletes vs. non-athletes and found that the athletes had higher average GPAs than the non-athletes, but he also notes that the athletes were required to meet certain minimum GPA requirements in order to play.

But within athletics, it looks like the smarter athletes perform better than dumber ones, which is why the NFL uses the Wonderlic Intelligence Test:

NFL draft picks have taken the Wonderlic test for years because team owners need to know if their million dollar player has the cognitive skills to be a star on the field.

What does the NFL know about hiring that most companies don’t? They know that regardless of the position, proof of intelligence plays a profound role in the success of every individual on the team. It’s not enough to have physical ability. The coaches understand that players have to be smart and think quickly to succeed on the field, and the closer they are to the ball the smarter they need to be. That’s why every potential draft pick takes the Wonderlic Personnel Test at the combine to prove he does–or doesn’t–have the brains to win the game. …

The first use of the WPT in the NFL was by Tom Landry of the Dallas Cowboys in the early 70s, who took a scientific approach to finding players. He believed players who could use their minds where it counted had a strategic advantage over the other teams. He was right, and the test has been used at the combine ever since.

For the NFL, years of testing shows that the higher a player scores on the Wonderlic, the more likely he is to be in the starting lineup—for any position. “There is no other reasonable explanation for the difference in test scores between starting players and those that sit on the bench,” Callans says. “Intelligence plays a role in how well they play the game.”

Let’s look at Exercising Intelligence: How Research Shows a Link Between Physical Activity and Smarts:

A large study conducted at the Sahlgrenska Academy and Sahlgrenska University Hospital in Gothenburg, Sweden, reveals that young adults who regularly exercise have higher IQ scores and are more likely to go on to university.

The study was published in the Proceedings of the National Academy of Sciences (PNAS), and involved more than 1.2 million Swedish men. The men were performing military service and were born between the years 1950 and 1976. Both their physical and IQ test scores were reviewed by the research team. …

The researchers also looked at data for twins and determined that primarily environmental factors are responsible for the association between IQ and fitness, and not genetic makeup. “We have also shown that those youngsters who improve their physical fitness between the ages of 15 and 18 increase their cognitive performance.”…

I have seen similar studies before, some involving mice and some, IIRC, the elderly. It appears that exercise is probably good for you.

I have a few more studies I’d like to mention quickly before moving on to discussion.

Here’s Grip Strength and Physical Demand of Previous Occupation in a Well-Functioning Cohort of Chinese Older Adults (h/t prius_1995), which found that participants who had previously worked in construction had greater grip strength than former office workers.

And here is Age and Gender-Specific Normative Data of Grip and Pinch Strength in a Healthy Adult Swiss Population (h/t prius_1995), which also reports grip strength by occupational workload, from sedentary through heavy work:


If the nerds are in the sedentary cohort, then they’d be just as athletic as, if not more athletic than, all of the other cohorts except the heavy-work group.

However, in Revised normative values for grip strength with the Jamar dynamometer, the authors found no effect of profession on grip strength.

And Isometric muscle strength and anthropometric characteristics of a Chinese sample (h/t prius_1995).

And Pumpkin Person has an interesting post about brain size vs. body size.


Discussion: Are nerds real?

Overall, it looks like smarter people are more athletic, more athletic people are smarter, smarter athletes are better athletes, and exercise may make you smarter. For most people, the nerd/jock dichotomy is wrong.

However, there is very little overlap at the very highest end of the athletic and intelligence curves–most college (and thus professional) athletes are less intelligent than the average college student, and most college students are less athletic than the average college (and professional) athlete.

Additionally, while people with STEM degrees make excellent spouses (except for mathematicians, apparently), their reproductive success is below average: they have sex later than their peers and, as far as the data I’ve been able to find shows, have fewer children.

Stephen Hawking

Even if there is a large overlap between smart people and athletes, they are still separate categories selecting for different things: a cripple can still be a genius, but can’t play football; a dumb person can play sports, but not do well at math. Stephen Hawking can barely move, but he’s still one of the smartest people in the world. So the set of all smart people will always include more “stereotypical nerds” than the set of all athletes, and the set of all athletes will always include more “stereotypical jocks” than the set of all smart people.

In my experience, nerds aren’t socially awkward (aside from their shyness around women). The myth that they are stems from the fact that they have different interests and communicate in a different way than non-nerds. Let nerds talk to other nerds, and they are perfectly normal, communicative, socially functional people. Put them in a room full of non-nerds, and suddenly the nerds are “awkward.”

Unfortunately, the vast majority of people are not nerds, so many nerds have to spend the majority of their time in the company of lots of people who are very different than themselves. By contrast, very few people of normal IQ and interests ever have to spend time surrounded by the very small population of nerds. If you did put them in a room full of nerds, however, you’d find that suddenly they don’t fit in. The perception that nerds are socially awkward is therefore just normie bias.

Why did the nerd/jock dichotomy become so popular in the 70s? Probably in part because science and technology were really taking off as fields normal people could aspire to major in: man had just landed on the moon, and the Intel 4004 was released in 1971. Very few people went to college or were employed in the sciences back in 1920; by 1970, colleges were everywhere and science was booming.

And at the same time, colleges and high schools were ramping up their athletics programs. I’d wager that the average school in the 1800s had neither PE nor athletics of any sort; to find those, you’d probably have to attend a private academy like Andover or Exeter. By the 70s, though, schools were taking their athletics programs–even athletic recruitment–seriously.

How strong you felt the dichotomy probably depends on the nature of your school. I have attended schools where all of the students were fairly smart and there was no anti-nerd sentiment, and I have attended schools where my classmates were fiercely anti-nerd and made sure I knew it.

But the dichotomy predates the terminology. Take Superman, who first appeared in 1938. His disguise is a pair of glasses, because no one can believe that the bookish, mild-mannered Clark Kent is actually the super-strong Superman. Batman is based on the character of Zorro, created in 1919. Zorro is an effete, weak, foppish nobleman by day and a dashing, sword-fighting hero of the poor by night. Of course these characters are both smart and athletic, but their disguises only work because others do not expect them to be. As fantasies, the characters are powerful because they provide a vehicle for our own desires: for our everyday, normal failings to be just a cover for how secretly amazing we are.

But for the most part, most smart people are perfectly fit, healthy, and coordinated–even the ones who like math.


Navigation and the Wealth of Nations

Global Determinants of Navigational Ability, by Coutrot et al.:

Using a mobile-based virtual reality navigation task, we measured spatial navigation ability in more than 2.5 million people globally. Using a clustering approach, we find that navigation ability is not smoothly distributed globally but clustered into five distinct yet geographically related groups of countries. Furthermore, the economic wealth of a nation (Gross Domestic Product per capita) was predictive of the average navigation ability of its inhabitants and gender inequality (Gender Gap Index) was predictive of the size of performance difference between males and females. Thus, cognitive abilities, at least for spatial navigation, are clustered according to economic wealth and gender inequalities globally.

This is an incredible study. They got 2.5 million people from all over the world to participate.
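The abstract mentions “a clustering approach.” The paper’s actual pipeline isn’t reproduced here, but as a rough illustration of the general idea–grouping countries by their average navigation performance–here is a minimal sketch using k-means. The country labels, the scores, and the choice of k-means itself are my own assumptions for demonstration, not the authors’ method or data:

```python
# Purely illustrative: cluster countries by a single summary navigation score.
# Neither the countries, the scores, nor the use of k-means comes from the paper.
import numpy as np
from sklearn.cluster import KMeans

countries = ["A", "B", "C", "D", "E", "F", "G", "H"]
# Hypothetical mean navigation scores (lower = better, like the paper's conditional modes)
scores = np.array([[0.12], [0.10], [0.35], [0.33], [0.60], [0.58], [0.90], [0.88]])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scores)
for country, label in zip(countries, kmeans.labels_):
    print(country, "-> cluster", label)
```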

If you’ve been following any of the myriad debates about intelligence, IQ, and education, you’re probably familiar with the concept of “multiple intelligences” and the fact that there’s rather little evidence that people actually have “different intelligences” that operate separately from each other. In general, it looks like people who have brains that are good at working out how to do one kind of task tend to be good at working out other sorts of tasks.

I’ve long held navigational ability as a possible exception to this: perhaps people in, say, Polynesian societies depended historically far more on navigational abilities than the rest of us, even though math and literacy were nearly absent.

Unfortunately, it doesn’t look like the authors got enough samples from Polynesia to include it in the study, but they did get data from Indonesia and the Philippines, which I’ll return to in a moment.

Frankly, I don’t see what the authors mean by “five distinct yet geographically related groups of countries.” South Korea is ranked between the UK and Belgium; Russia is next to Malaysia; Indonesia is next to Portugal and Hungary.

GDP per capita appears to be a stronger predictor than geography:

Some people will say these results merely reflect experience playing video games–people in wealthier countries have probably spent more time and money on computers and games. But assuming that the people who are participating in the study in the first place are people who have access to smartphones, computers, video games, etc., the results are not good for the multiple-intelligences hypothesis.

In the GDP per Capita vs. Conditional Modes graph (i.e., how well a nation scored overall, with low scores better than high scores), countries above the trend line are under-performing relative to their GDPs, and countries below the line are over-performing relative to their GDPs.
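To make the “above/below the trend line” idea concrete, here is a minimal sketch with invented numbers (the real analysis would use the paper’s conditional modes and actual GDP figures): fit a simple trend line of navigation score on log GDP per capita and look at the sign of each country’s residual.

```python
# Sketch of the trend-line logic with made-up data.
# Conditional modes: lower = better navigation. A positive residual means a country
# navigates worse than its GDP predicts (under-performing); a negative residual
# means it navigates better than its GDP predicts (over-performing).
import numpy as np

gdp_per_capita   = np.array([5_000, 12_000, 20_000, 35_000, 50_000, 60_000])  # hypothetical
conditional_mode = np.array([0.90, 0.60, 0.55, 0.30, 0.20, 0.25])             # hypothetical

x = np.log(gdp_per_capita)
slope, intercept = np.polyfit(x, conditional_mode, 1)   # simple linear trend line
residuals = conditional_mode - (slope * x + intercept)

for gdp, r in zip(gdp_per_capita, residuals):
    status = "under-performing (worse than GDP predicts)" if r > 0 else "over-performing"
    print(f"GDP {gdp:>6}: residual {r:+.3f} -> {status}")
```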

South Africa, for example, significantly over-performs relative to its GDP, probably due to sampling bias: white South Africans with smartphones and computers were probably more likely to participate in the study than members of the black majority (roughly 80% of the population), but the GDP reflects the entire population. Finland and New Zealand also score better than their GDPs predict–or, put another way, they under-perform economically relative to their navigational ability–perhaps because Finland is really cold and NZ is isolated.

On the other side of the line, the UAE, Saudi Arabia, and Greece navigate worse than their GDPs predict–that is, their economies over-perform relative to their navigational ability. Two of these are oil states that would be much poorer if not for geographic chance, and as far as I can tell, the whole Greek economy is being propped up by German loans. (There is also evidence that Greek IQ is falling, though this may be a near universal problem in developed nations.)

Three other nations stand out in the “scoring better than GDP predicts” category: Ukraine (which suffered under Communism, and Communism seems to do bad things to countries), Indonesia, and the Philippines. While we could be looking at selection bias similar to South Africa’s, Indonesia and the Philippines are island nations in which navigational ability surely had some historical effect on people’s ability to survive.

Indonesia and the Philippines still didn’t do as well as first-world nations like Norway and Canada, but they outperformed other nations with similar GDPs like Egypt, India, and Macedonia. This is the best evidence I know of for independent selection for navigational ability in some populations.

The study’s other interesting findings were that women performed consistently worse than men, both across countries and age groups (except for the post-90 cohort, but that might just be an error in the data). Navigational ability declines steeply for everyone from about age 23 until about age 75; the authors suggest that the apparent increase after that point might be sampling error, since old people who are good at video games are disproportionately likely to seek out video-game-related challenges.

The authors note that people who drive more (e.g., Americans and Canadians) might do better on navigational tasks than people who use public transportation more (e.g., Europeans), but also that Finno-Scandians are among the world’s best navigators despite heavy use of public transport in those countries. The authors write:

We speculate that this specificity may be linked to Nordic countries sharing a culture of participating in a sport related to navigation: orienteering. Invented as an official sport in the late 19th century in Sweden, the first orienteering competition open to the public was held in Norway in 1897. Since then, it has been more popular in Nordic countries than anywhere else in the world, and is taught in many schools [26]. We found that ‘orienteering world championship’ country results significantly correlated with countries’ CM (Pearson’s correlation ρ = .55, p = .01), even after correcting for GDP per capita (see Extended Data Fig. 15). Future targeted research will be required to evaluate the impact of cultural activities on navigation skill.
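For readers wondering what a correlation that holds “even after correcting for GDP per capita” typically means in practice, here is a minimal sketch of the standard technique: regress both variables on GDP and correlate the residuals (a partial correlation). Every number below is invented, and this is not the authors’ code.

```python
# Partial correlation sketch with invented data (degrees of freedom not adjusted;
# this only illustrates the idea of "correcting for" a third variable).
import numpy as np
from scipy import stats

orienteering_points = np.array([80, 60, 75, 10, 5, 30, 20, 15])             # hypothetical
navigation_score    = np.array([0.9, 0.8, 0.85, 0.3, 0.2, 0.5, 0.4, 0.35])  # hypothetical
gdp_per_capita      = np.array([55, 50, 60, 30, 25, 40, 35, 28])            # hypothetical, $1000s

# Plain Pearson correlation
r, p = stats.pearsonr(orienteering_points, navigation_score)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# "Correct for" GDP by regressing it out of both variables, then correlating residuals
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r_partial, p_partial = stats.pearsonr(
    residuals(orienteering_points, gdp_per_capita),
    residuals(navigation_score, gdp_per_capita),
)
print(f"Partial r (controlling for GDP) = {r_partial:.2f}, p = {p_partial:.3f}")
```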

I suggest a different causal relationship: people make hobbies out of things they’re already good at and enjoy doing, rather than things they’re bad at.


Please note that the study doesn’t look at a big chunk of countries, like most of Africa. Being at the bottom in navigational abilities in this study by no means indicates that a country is at the bottom globally–given the trends already present in the data, it is likely that the poorer countries that weren’t included in the study would do even worse.

Politics are Getting Dumber

You don’t need to watch the video. I haven’t watched the video. I’m only highlighting it because it starts with a moronic question.

Meanwhile, in the social justice warriors vs inanimate objects department:

Kick that statue! Yeah! You show that big chunk of metal who’s boss!

And in inanimate objects vs. inanimate objects:

CNN is impressed by the fact that statues (normally) don’t move.

This one is stupid on several levels–the statue itself, erected by a male-dominated industry to celebrate “female empowerment,” infantilizes women by symbolically depicting them as a small, stupid child who doesn’t know enough to get out of the way of a charging bull.

You know, I could keep posting examples of stupidity all day.

Mob mentality is never good, but it seems like political discourse is getting progressively stupider.

It takes a certain level of intelligence to do two critical things:

  1. Understand and calmly discuss other people’s opinions even when you disagree with them
  2. Realize that cooperating in the prisoner’s dilemma is long-term better than defecting, even if you don’t like the people you’re cooperating with (a toy simulation of this appears after the list)
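To make point 2 concrete, here is a toy simulation of the iterated prisoner’s dilemma using the textbook payoffs (mutual cooperation 3 each, mutual defection 1 each, a successful defector gets 5 while the sucker gets 0). The strategies and numbers are purely illustrative and aren’t meant to model any real political actors; the takeaway is just that over many rounds, mutual cooperation beats mutual defection, and a habitual defector facing a tit-for-tat partner quickly loses its early advantage.

```python
# Toy iterated prisoner's dilemma with standard textbook payoffs.
PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees only the opponent's past moves
        move_b = strategy_b(history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

always_cooperate = lambda opp_history: "C"
always_defect    = lambda opp_history: "D"
tit_for_tat      = lambda opp_history: opp_history[-1] if opp_history else "C"

print("Cooperator vs cooperator:", play(always_cooperate, always_cooperate))  # (300, 300)
print("Defector vs defector:    ", play(always_defect, always_defect))        # (100, 100)
print("Defector vs tit-for-tat: ", play(always_defect, tit_for_tat))          # (104, 99)
```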

Traditional “liberalism”* was a kind of meta-political technology for allowing different groups of people to live in one country without killing each other. Freedom of Religion, for example, became an agreed-upon principle after centuries of religious violence in Europe. If the state is going to promote a particular religion and outlaw others, then it’s in every religious person’s interest to try to take over the state and make sure it enforces their religion. If the state stays (ostensibly) neutral, then no one can commandeer it to murder their religious enemies.

*”Liberal” has in recent years become an almost empty anachronism, but I hope its meaning is clear in the historical context of 1787.

Freedom of Speech, necessary for people to make informed decisions, has recently come under attack for political reasons. Take the thousands of protestors who showed up to an anti-Free Speech rally in Boston on Saturday, August 19th.

The Doublespeak is Strong with this One

Of course no one likes letting their enemies speak, but everyone is someone else’s enemy. Virtually every historical atrocity was committed by people convinced that they were right and merely opposing evil, despicable people. Respecting free speech does not require liking other people’s arguments. It requires understanding that if you start punching Nazis, Nazis will punch you back, and soon everyone will be screaming “Nazi!” while punching random people.

Edit: apparently one article I linked to was a hoax. Hard to tell sometimes.

Now, Free Speech has often been honored more as an ideal than a reality. When people are out of power, they tend to defend the ideal rather strongly; when in power, they suddenly seem to lose interest in it. But most people interested in politics still seemed to have some general sense that even if they hated that other guy’s guts, it might be a bad idea to unleash mob violence on him.

In general, principles like free speech and freedom of religion let different people–and different communities of people–run their own lives and communities as they see fit, without coming into direct conflict with each other, while still getting to enjoy the national security and trade benefits of living in a large country. The Amish get to be Amish, New Hampshirites get to live free or die, and Coloradans get to eat pot brownies.

But that requires being smart enough to understand that to keep a nation of over 300 million people together, you have to live and let live–and occasionally hold your nose and put up with people you hate.

These days, politics just seems like it’s getting a lot dumber:

Cat that nearly died after being attacked by a thug “because he looks like Hitler” has now recovered despite losing an eye.