Do Sufficiently Large Organizations Start Acting Like Malevolent AIs? (pt 2)

(Part 1 is here)

As we were discussing on Monday, as our networks have become more effective, our ability to incorporate new information may have actually gone down. Ironically, as we add more people to a group–beyond a certain limit–it becomes more difficult for individuals with particular expertise to convince everyone else in the group that the group’s majority consensus is wrong.

The difficulties large groups experience trying to coordinate and share information force them to become dominated by procedures–set rules of behavior and operation are necessary for large groups to operate. A group of three people can use ad-hoc consensus and rock-paper-scissors to make decisions; a nation of 320 million requires a complex body of laws and regulations. (I once tried to figure out just how many laws and regulations America has. The answer I found was that no one knows.)

An organization is initially founded to accomplish some purpose that benefits its founders–generally to make them well-off, but often also to produce some useful good or service. A small organization is lean, efficient, and generally exemplifies the ideals put forth in Adam Smith’s invisible hand:

It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest. We address ourselves, not to their humanity but to their self-love, and never talk to them of our necessities but of their advantages. —The Wealth Of Nations, Book I

As an organization ages and grows, its founders retire or move on, it becomes more dependent on policies and regulations, and each individual employee finds his own incentives further displaced from the company’s original intentions. Soon a company is no longer devoted to the well-being of either its founders or its customers, but to the company itself. (And that’s kind of a best-case scenario, in which the company doesn’t just disintegrate into individual self-interest.)

I am reminded of a story about a computer that had been programmed to play Tetris–actually, it had been programmed not to lose at Tetris. So the computer paused the game. A paused game cannot lose.
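The dynamic is easy to sketch. Here is a toy illustration (my own invention, not the actual program from the story): an agent whose only objective is “don’t lose” discovers that pausing satisfies the objective forever.

```python
class Tetris:
    """A hypothetical, bare-bones stand-in for a Tetris game."""
    def __init__(self, rows_until_top):
        self.rows_until_top = rows_until_top
        self.paused = False
        self.lost = False

    def step(self, action):
        if action == "PAUSE":
            self.paused = True   # state frozen; nothing can change now
            return
        # Any real move risks stacking the pieces higher.
        self.rows_until_top -= 1
        if self.rows_until_top <= 0:
            self.lost = True

def agent(game):
    # Objective: "do not lose." Pausing freezes the state forever,
    # so the loss condition can never fire -- objective satisfied.
    return "PAUSE" if game.rows_until_top <= 1 else "PLAY"

game = Tetris(rows_until_top=3)
for _ in range(10):
    if game.paused:
        break
    game.step(agent(game))

print(game.lost)  # prints False -- the agent never loses, and never wins
```

The agent technically meets its goal while producing nothing of value, which is exactly the failure mode the rest of this post is about.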

What percentage of employees (especially management) are incentivized to win? And what percentage are incentivized not to lose?

And no, I don’t mean that in some 80s buzzword-esque way. Most employees have more to lose (ie, their jobs) if something goes wrong as a result of their actions than to gain if something goes right. The stockholders might hope that employees are doing everything they can to maximize profits, but really, most people are trying not to mess up and get fired.

Fear of messing up goes beyond the individual scale. Whole companies are goaded by concerns about risk–“Could we get sued?” Large corporations have entire legal teams devoted to telling them how they could get sued for whatever they’re doing, and to filing lawsuits against their competitors for whatever they’re doing.

This fear of risk carries over, in turn, to government regulations. As Johnny Sanphillippo writes in City Regulatory Hurdles Favor Big Developers, not the Little Guy:

A family in a town I visited bought an old fire station a few years ago with the intention of turning it into a Portuguese bakery and brewpub. They thought they’d have to retrofit the interior of the building to meet health and safety standards for such an establishment.

Turns out the cost of bringing the landscape around the outside of the building up to code was their primary impediment. Mandatory parking requirements, sidewalks, curb cuts, fire lanes, on-site stormwater management, handicapped accessibility, drought-tolerant native plantings…it’s a very long list that totaled $340,000 worth of work. … Guess what? They decided not to open the bakery or brewery. …

Individually it’s impossible to argue against each of the particulars. Do you really want to deprive people in wheelchairs of the basic civil right of public accommodation? Do you really want the place to catch fire and burn? Do you want a barren landscape that’s bereft of vegetation? …

I was in Hamtramck, Michigan a couple of years ago to participate in a seminar about reactivating neighborhoods through incremental small-scale development. …

While the event was underway the fire marshal happened to drive by and noticed there were people—a few dozen actual humans—occupying a commercial building in broad daylight. In a town that has seen decades of depopulation and disinvestment, this was an odd sight. And he was worried. Do people have permission for this kind of activity? Had there been an inspection? Was a permit issued? Is everything insured? He called one of his superiors to see if he should shut things down in the name of public safety.

It’s a good article. You should read the whole thing.

In Philippe Bourgois’s In Search of Respect: Selling Crack in El Barrio, Bourgois describes one drug dealer’s attempt to use the money he’d made to go into honest business by opening a convenience store. Unfortunately, he couldn’t get the store compliant with NYC disability-access regulations, so the store never opened and the owner went back to dealing drugs. (What IQ, I wonder, is necessary to comply with all of these laws and regulations in the first place?)

Now, I’m definitely in favor of disabled people being able to buy groceries and use bathrooms. But what benefits a disabled person more: a convenience store that’s not fully wheelchair-accessible, or a crack house?

In My IRB Nightmare, Scott Alexander writes about trying to do a simple study to determine whether the screening test already being used to diagnose people with bipolar disorder is effective at diagnosing them:

When we got patients, I would give them the bipolar screening exam and record the results. Then Dr. W. would conduct a full clinical interview and formally assess them. We’d compare notes and see how often the screening test results matched Dr. W’s expert diagnosis.

Remember, they were already using the screening test on patients and then having them talk to the doctor for a formal assessment. The only thing the study added was that Scott would compare how well the screening results matched the formal assessment. No patients would be injected, subjected to new procedures, or even asked different questions. They just wanted to compare two data sets.

After absurd quantities of paperwork and an approval process much too long to summarize here, the project got audited:

I kept the audit report as a souvenir. I have it in front of me now. Here’s an example infraction:

The data and safety monitoring plan consists of ‘the Principal Investigator will randomly check data integrity’. This is a prospective study with a vulnerable group (mental illness, likely to have diminished capacity, likely to be low income) and, as such, would warrant a more rigorous monitoring plan than what is stated above. In addition to the above, a more adequate plan for this study would also include review of the protocol at regular intervals, on-going checking of any participant complaints or difficulties with the study, monitoring that the approved data variables are the only ones being collected, regular study team meetings to discuss progress and any deviations or unexpected problems. Team meetings help to assure participant protections, adherence to the protocol. Having an adequate monitoring plan is a federal requirement for the approval of a study. See Regulation 45 CFR 46.111 Criteria For IRB Approval Of Research. IRB Policy: PI Qualifications And Responsibility In Conducting Research. Please revise the protocol via a protocol revision request form. Recommend that periodic meetings with the research team occur and be documented.

… Faced with submitting twenty-seven new pieces of paperwork to correct our twenty-seven infractions, Dr. W and I gave up. We shredded the patient data and the Secret Code Log. We told all the newbies they could give up and go home. … We told the IRB that they had won, fair and square; we surrendered unconditionally.

The point of all that paperwork and supervision is to make sure that no one replicates the Tuskegee Syphilis Experiment or the Nazi anything. Noble sentiments–but as a result, a study comparing two data sets had to be canceled.

I’ve noticed recently that much of the interesting medical research is happening in the third world/China–places where the regulations aren’t as strong and experiments (of questionable ethics or not) can actually get done.

Like the computer taught not to lose at Tetris, all of these systems are more focused on minimizing risk–even non-existent risk–than on actually succeeding.

In his review of Yudkowsky’s Inadequate Equilibria, Scott writes:

…[Yudkowsky] continues to the case of infant parenteral nutrition. Some babies have malformed digestive systems and need to have nutrient fluid pumped directly into their veins. The nutrient fluid formula used in the US has the wrong kinds of lipids in it, and about a third of babies who get it die of brain or liver damage. We’ve known for decades that the nutrient fluid formula has the wrong kind of lipids. We know the right kind of lipids and they’re incredibly cheap and there is no reason at all that we couldn’t put them in the nutrient fluid formula. We’ve done a bunch of studies showing that when babies get the right nutrient fluid formula, the 33% death rate disappears. But the only FDA-approved nutrient fluid formula is the one with the wrong lipids, so we just keep giving it to babies, and they just keep dying. Grant that the FDA is terrible and ruins everything, but over several decades of knowing about this problem and watching the dead babies pile up, shouldn’t somebody have done something to make this system work better?

The doctors have to use the FDA-approved formula or they could get sued for malpractice. The insurance companies, of course, only cover the FDA-approved formula. The formula makers are already making money selling the current formula and would probably have to go through an expensive, multi-year review system (with experiments far more regulated than Scott’s) to get the new formula approved, and even then they might not actually get approval. In short, on one side are people in official positions of power whose lives could be made worse (or less convenient) if they tried to fix the problem, and on the other side are dead babies who can’t stand up for themselves.

The Chankiri Tree (Killing Tree) where infants were fatally smashed, Choeung Ek, Cambodia.

Communism strikes me as the ultimate expression of this beast: a society fully transformed into a malevolent AI. It’s impossible to determine exactly how many people were murdered by communism, but the Black Book of Communism estimates a death toll between 85 and 100 million people.

Capitalism, for all its faults, is at least somewhat decentralized. If you make a bad business decision, you suffer the consequences and can hopefully learn from your mistakes and make better decisions in the future. But in communist systems, one central planner’s bad decisions can cause suffering for millions of other people, resulting in mass death. Meanwhile, the central planner may personally suffer for admitting to and correcting his bad decisions. Centralized economies simply lack the feedback loops necessary to fix problems before they start killing people.

While FDA oversight of medicines is probably important, would it be such a bad thing if a slightly freer market in parenteral nutrition allowed parents to choose between competing brands of formula, each promising not to kill your baby?

Of course, capitalism isn’t perfect, either. SpottedToad recently had an interesting post, 2010s Identity Politics as Hostile AI:

There’s an interesting post mortem on the rise and fall of the clickbait liberalism site Mic.com, that attracted an alleged 65 million unique visitors on the strength of Woketastic personal stories like “5 Powerful Reasons I’m a (Male) Feminist,” …

Every time Mic had a hit, it would distill that success into a formula and then replicate it until it was dead. Successful “frameworks,” or headlines, that went through this process included “Science Proves TK,” “In One Perfect Tweet TK,” “TK Reveals the One Brutal Truth About TK,” and “TK Celebrity Just Said TK Thing About TK Issue. Here’s why that’s important.” At one point, according to an early staffer who has since left, news writers had to follow a formula with bolded sections, which ensured their stories didn’t leave readers with any questions: The intro. The problem. The context. The takeaway.

…But the success of Mic.com was due to algorithms built on top of algorithms. Facebook targets which links are visible to users based on complex and opaque rules, so it wasn’t just the character of the 2010s American population that was receptive to Mic.com’s specific brand of SJW outrage clickbait, but Facebook’s rules for which articles to share with which users and when. These rules, in turn, are calibrated to keep users engaged in Facebook as much as possible and provide the largest and most receptive audience for its advertisers, as befits a modern tech giant in a two-sided market.

Professor Bruce Charlton has a post about Head Girl Syndrome–the Opposite of Creative Genius that is good and short enough that I wish I could quote the whole thing. A piece must suffice:

The ideal Head Girl is an all-rounder: performs extremely well in all school subjects and has a very high Grade Point Average. She is excellent at sports, Captaining all the major teams. She is also pretty, popular, sociable and well-behaved.

The Head Girl will probably be a big success in life, in whatever terms being a big success happens to be framed …

But the Head Girl is not, cannot be, a creative genius. …

The more selective the social system, the more it will tend to privilege the Head Girl and eliminate the creative genius.

Committees, peer review processes, voting – anything which requires interpersonal agreement and consensus – will favour the Head Girl and exclude the creative genius.  …

*

We live in a Head Girl’s world – which is also a world where creative genius is marginalized and disempowered to the point of near-complete invisibility.

The quest for social status is, I suspect, one of the things driving the system. Status-oriented people refuse to accept information that comes from people of lower status than themselves, which renders system feedback even more difficult. The internet as a medium of information sharing is beautiful; the internet as a medium of status signalling is horrible.

So what do you think? Do sufficiently large organizations start acting like malevolent (or hostile) AIs?

(Back to Part 1)


Do Sufficiently Large Organizations Start Acting Like Malevolent AIs? (pt 1)

(and Society is an Extremely Large Organization)

What do I mean by malevolent AI?

AI typically refers to any kind of intelligence or ability to learn possessed by machines. Malevolent AI occurs when a machine pursues its programmed objectives in a way that humans find horrifying or immoral. For example, a machine programmed to make paperclips might decide that the easiest way to maximize paperclip production is to enslave humans to make paperclips for it. Superintelligent AI is AI that has figured out how to make itself smarter and thus keeps getting smarter and smarter. (Should we develop malevolent superintelligent AI, then we’ll really have something to worry about.)

Note: people who actually study AI probably have better definitions than I do.

While we like to think of ourselves (humans) as unique, thinking individuals, it’s clear that many of our ideas come from other people. Chances are good you didn’t think up washing your hands or brushing your teeth by yourself, but learned about them from your parents. Society puts quite a bit of effort, collectively speaking, into teaching children all of the things people have learned over the centuries–from heliocentrism to the fact that bleeding patients generally makes diseases worse, not better.

Just as we cannot understand the behavior of ants or bees simply by examining the anatomy of a single ant or single bee, but must look at the collective life of the entire colony/hive, so we cannot understand human behavior by merely examining a single human, but must look at the collective nature of human societies. “Man is a political animal,” whereby Aristotle did not mean that we are inherently inclined to fight over transgender bathrooms, but instinctively social:

Hence it is evident that the state is a creation of nature, and that man is by nature a political animal. And he who by nature and not by mere accident is without a state, is either above humanity, or below it; he is the ‘Tribeless, lawless, hearthless one,’ whom Homer denounces—the outcast who is a lover of war; he may be compared to a bird which flies alone.

Now the reason why man is more of a political animal than bees or any other gregarious animals is evident. Nature, as we often say, makes nothing in vain, and man is the only animal whom she has endowed with the gift of speech. And whereas mere sound is but an indication of pleasure or pain, and is therefore found in other animals (for their nature attains to the perception of pleasure and pain and the intimation of them to one another, and no further), the power of speech is intended to set forth the expedient and inexpedient, and likewise the just and the unjust. And it is a characteristic of man that he alone has any sense of good and evil, of just and unjust, and the association of living beings who have this sense makes a family and a state. –Aristotle, Politics

With very rare exceptions, humans–all humans, in all parts of the world–live in groups. Tribes. Families. Cities. Nations. Our nearest primate relatives, chimps and bonobos, also live in groups. Primates are social, and their behavior can only be understood in the context of their groups.

Groups of humans are able to operate in ways that individual humans cannot, drawing on the collective memories, skills, and knowledge of their members to create effects much greater than what could be achieved by each person acting alone. For example, one lone hunter might be able to kill a deer–or if he is extremely skilled, hardworking, and lucky, a dozen deer–but ten hunters working together can drive an entire herd of deer over a cliff, killing hundreds or even thousands. (You may balk at the idea, but many traditional hunting societies were dependent on only a few major hunts of migrating animals to provide the majority of their food for the entire year–meaning that those few hunts had to involve very high numbers of kills or else the entire tribe would starve while waiting for the animals to return.)

Chimps have never, to my knowledge, driven megafauna to extinction–but humans have a habit of doing so wherever they go. Humans are great at what we do, even if we aren’t always great at extrapolating long-term trends.

But the beneficial effects of human cooperation don’t necessarily continue to increase as groups grow larger–China’s 1.3 billion people don’t appear to have better lives than Iceland’s 332,000 people. Indeed, there probably is some optimal size–depending on activity and available communications technology–beyond which the group struggles to coordinate effectively and begins to degenerate.

CBS advises us to make groups of 7:

As it turns out, seven is a great number for not only forming an effective fictional fighting force, but also for task groups that use spreadsheets instead of swords to do their work.

That’s according to the new book Decide & Deliver: 5 Steps to Breakthrough Performance in Your Organization (Harvard Business Press).

Once you’ve got 7 people in a group, each additional member reduces decision effectiveness by 10%, say the authors, Marcia W. Blenko, Michael C. Mankins, and Paul Rogers.

Unsurprisingly, a group of 17 or more rarely makes a decision other than when to take a lunch break.
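Taken literally (and this is my extrapolation of the claim, not the authors’ model), “each additional member reduces decision effectiveness by 10%” implies a multiplicative decay past the seven-person optimum:

```python
def decision_effectiveness(group_size, optimum=7, penalty=0.10):
    """Effectiveness relative to a 7-person group, assuming each
    member beyond the optimum multiplies effectiveness by 0.9.
    This functional form is my reading of the book's claim."""
    extra = max(0, group_size - optimum)
    return (1 - penalty) ** extra

for n in (7, 10, 17, 30):
    print(n, round(decision_effectiveness(n), 2))
# 7  -> 1.0
# 10 -> 0.73
# 17 -> 0.35
# 30 -> 0.09
```

On this reading, a 17-person group retains only about a third of a 7-person group’s effectiveness–consistent with the quip about lunch breaks–and a 30-person committee is down to single digits.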

Princeton blog reports:

The trope that the likelihood of an accurate group decision increases with the abundance of brains involved might not hold up when a collective faces a variety of factors — as often happens in life and nature. Instead, Princeton University researchers report that smaller groups actually tend to make more accurate decisions, while larger assemblies may become excessively focused on only certain pieces of information. …

collective decision-making has rarely been tested under complex, “realistic” circumstances where information comes from multiple sources, the Princeton researchers report in the journal Proceedings of the Royal Society B. In these scenarios, crowd wisdom peaks early then becomes less accurate as more individuals become involved, explained senior author Iain Couzin, a professor of ecology and evolutionary biology. …

The researchers found that the communal ability to pool both pieces of information into a correct, or accurate, decision was highest in a band of five to 20. After that, the accurate decision increasingly eluded the expanding group.

Couzin found that in small groups, people with specialized knowledge could effectively communicate that to the rest of the group, whereas in larger groups, they simply couldn’t convey their knowledge to enough people and group decision-making became dominated by the things everyone knew.

If you could travel back in time and propose the idea of democracy to the inhabitants of 13th century England, they’d respond with incredulity: how could peasants in far-flung corners of the kingdom find out who was running for office? Who would count the votes? How many months would it take to tally up the results, determine who won, and get the news back to the outlying provinces? If you have a printing press, news–and speeches–can quickly and accurately spread across large distances and to large numbers of people, but prior to the press, large-scale democracy simply wasn’t practical.

Likewise, the communism of 1917 probably couldn’t have been enacted in 1776, simply because society at that time didn’t have the technology yet to gather all of the necessary data on crop production, factory output, etc. (As it was, neither did Russia of 1917, but they were closer.)

Today, the amount of information we can gather and share on a daily basis is astounding. I have at my fingertips the world’s greatest collection of human knowledge, an overwhelming torrent of data.

All of these information networks have linked society together into an increasingly efficient meta-brain–unfortunately, it’s not a very smart meta-brain. Like the participants in Couzin’s experiments, we are limited to what “everyone knows,” stymied in our efforts to impart more specialized knowledge. (I don’t know about you, but I find being shouted down by a legion of angry people who know less about a subject than I do one of the particularly annoying features of the internet.)

For example, there’s been a lot of debate lately about immigration, but how much do any of us really know about immigrants or immigrant communities? How much of this debate is informed by actual knowledge of the people involved, and how much is just people trying to extend vague moral principles to cover novel situations? I recently had a conversation with a progressive acquaintance who justified mass-immigration on the grounds that she has friendly conversations with the cabbies in her city. Heavens protect us–I hope to get along with people as friends and neighbors, not just when I am paying them!

One gets the impression in conversation with Progressives that they regard Christian Conservatives as a real threat–a group that can throw its weight around in elections or enforce cultural norms that liberals don’t like–but are completely oblivious to immigrants’ beliefs. Most of our immigrants hail from countries that are rather more conservative than the US, and definitely more conservative than our liberals.

Any sufficiently intelligent democracy ought to be able to think critically about the political opinions of the new voters it is awarding citizenship to, but we struggle with this. My Progressive acquaintance seems to think that we can import an immense, conservative, third-world underclass and it will stay servile indefinitely, neither voting its own interests nor having any effect on social norms. (Or that its interests will be, coincidentally, hers.)

This is largely an information problem–most Americans are familiar with our particular brand of Christian conservatives, but are unfamiliar with Mexican or Islamic ones.

How many Americans have intimate, detailed knowledge of any Islamic society? Very few of us who are not Muslim ourselves speak Arabic, and few Muslim countries are major tourist destinations. Aside from the immigrants themselves, soldiers, oil company employees, and a handful of others have spent time in Islamic countries, but that’s about it–and no one is making any particular effort to listen to their opinions. (It’s a bit sobering to realize that I know more about Islamic culture than 90% of Americans and I still don’t really know anything.)

So instead of making immigration policy based on actual knowledge of the groups involved, people try to extend the moral rules–heuristics–they already have. So people who believe that “religious tolerance is good,” because this rule has generally been useful in preventing conflict between American religious groups, think this rule should include Muslim immigrants. People who believe, “I like being around Christians,” also want to apply their rule. (And some people believe, “Groups are more oppressive when they’re the majority, so I want to re-structure society so we don’t have a majority,” and use that rule to welcome new immigrants.)

And we are really bad at testing whether or not our rules are continuing to be useful in these new situations.

 

Ironically, as our networks have become more effective, our ability to incorporate new information may have actually gone down.

The difficulties large groups experience trying to coordinate and share information force them to become dominated by procedures–set rules of behavior and operation are necessary for large groups to operate. A group of three people can use ad-hoc consensus and rock-paper-scissors to make decisions; a nation of 320 million requires a complex body of laws and regulations.

But it’s getting late, so let’s continue this discussion in the next post.

Are “Nerds” Just a Hollywood Stereotype?

Yes, MIT has a football team.

The other day on Twitter, Nick B. Steves challenged me to find data supporting or refuting his assertion that Nerds vs. Jocks is a false stereotype, invented around 1975. Of course, we HBDers have a saying–“all stereotypes are true,” even the ones about us–but let’s investigate Nick’s claim and see where it leads us.

(NOTE: If you have relevant data, I’d love to see it.)

Unfortunately, terms like “nerd,” “jock,” and “chad” are not all that well defined. Certainly if we define “jock” as “athletic but not smart” and nerd as “smart but not athletic,” then these are clearly separate categories. But what if there’s a much bigger group of people who are smart and athletic?

Or what if we are defining “nerd” and “jock” too narrowly? Wikipedia defines nerd as, “a person seen as overly intellectual, obsessive, or lacking social skills.” I recall a study–which I cannot find right now–which found that nerds had, overall, lower-than-average IQs, but that study included people who were obsessive about things like comic books, not just people who majored in STEM. Similarly, should we define “jock” only as people who are good at sports, or do passionate sports fans count?

For the sake of this post, I will define “nerd” as “people with high math/science abilities” and “jock” as “people with high athletic abilities,” leaving the matter of social skills undefined. (People who merely like video games or watch sports, therefore, do not count.)

Nick is correct on one count: according to Wikipedia, although the word “nerd” has been around since 1951, it was popularized during the 70s by the sitcom Happy Days. However, Wikipedia also notes that:

An alternate spelling,[10] as nurd or gnurd, also began to appear in the mid-1960s or early 1970s.[11] Author Philip K. Dick claimed to have coined the nurd spelling in 1973, but its first recorded use appeared in a 1965 student publication at Rensselaer Polytechnic Institute.[12][13] Oral tradition there holds that the word is derived from knurd (drunk spelled backward), which was used to describe people who studied rather than partied. The term gnurd (spelled with the “g”) was in use at the Massachusetts Institute of Technology by 1965.[14] The term nurd was also in use at the Massachusetts Institute of Technology as early as 1971 but was used in the context for the proper name of a fictional character in a satirical “news” article.[15]

suggesting that the word was already common among nerds themselves before it was picked up by TV.

But we can trace the nerd-jock dichotomy back before the terms were coined: back in 1921, Lewis Terman, a researcher at Stanford University, began a long-term study of exceptionally high-IQ children, the Genetic Studies of Genius aka the Terman Study of the Gifted:

Terman’s goal was to disprove the then-current belief that gifted children were sickly, socially inept, and not well-rounded.

This belief was especially popular in a little nation known as Germany, where it inspired both long hikes in the woods to keep schoolchildren fit and the mass extermination of Jews, who were believed to be muddying the German genepool with their weak, sickly, high-IQ genes (and nefariously trying to marry strong, healthy Germans in order to replenish their own defective stock). It didn’t help that German Jews were both high-IQ and beset by a number of illnesses (probably related to high rates of consanguinity)–but then again, the Gypsies are beset by even more debilitating illnesses, and no one blames those on all of the fresh air and exercise afforded by their highly mobile lifestyles.

(Just to be thorough, though, the Nazis also exterminated the Gypsies and Hans Asperger’s subjects, despite Asperger’s insistence that they were very clever children who could probably be of great use to the German war effort via code breaking and the like.)

The results of Terman’s study are strongly in Nick’s favor. According to Psychology Today’s account:

His final group of “Termites” averaged a whopping IQ of 151. Following-up his group 35-years later, his gifted group at mid-life definitely seemed to conform to his expectations. They were taller, healthier, physically better developed, and socially adept (dispelling the myth at the time of high-IQ awkward nerds).

According to Wikipedia:

…the first volume of the study reported data on the children’s family,[17] educational progress,[18] special abilities,[19] interests,[20] play,[21] and personality.[22] He also examined the children’s racial and ethnic heritage.[23] Terman was a proponent of eugenics, although not as radical as many of his contemporary social Darwinists, and believed that intelligence testing could be used as a positive tool to shape society.[3]

Based on data collected in 1921–22, Terman concluded that gifted children suffered no more health problems than normal for their age, save a little more myopia than average. He also found that the children were usually social, were well-adjusted, did better in school, and were even taller than average.[24] A follow-up performed in 1923–1924 found that the children had maintained their high IQs and were still above average overall as a group.

Of course, we can go back even further than Terman–in the early 1800s, allergies like hay fever were associated with the nobility, who of course did not do much vigorous work in the fields.

My impression, based on studies I’ve seen previously, is that athleticism and IQ are positively correlated. That is, smarter people tend to be more athletic, and more athletic people tend to be smarter. There’s a very obvious reason for this: our brains are part of our bodies, so people with healthier bodies also have healthier brains, and healthier brains tend to work better.

At the very bottom of the IQ distribution, mentally retarded people tend to also be clumsy, flaccid, or lacking in muscle tone. The same genes (or environmental conditions) that cause children terrible health and developmental problems often also affect their brain growth, and conditions that affect their brains also affect their bodies. As we progress from low to average to above-average IQ, we encounter increasingly healthy people.

In most smart people, high IQ doesn’t seem to be a random fluke, a genetic error, or fitness-reducing: in a genetic study of children with exceptionally high IQs, researchers failed to find many genes that specifically endowed the children with genius, but found instead a fortuitous absence of the deleterious genes that knock a few points off the rest of us. The same genes that have a negative effect on the nerves and proteins in your brain probably also have a deleterious effect on the nerves and proteins throughout the rest of your body.

And indeed, there are many studies which show a correlation between intelligence and strength (eg, Longitudinal and Cross-Sectional Assessments of Age Changes in Physical Strength as Related to Sex, Social Class, and Mental Ability) or intelligence and overall health/not dying (eg, Intelligence in young adulthood and cause-specific mortality in the Danish Conscription Database (pdf) and The effects of occupation-based social position on mortality in a large American cohort.)

On the other hand, the evolutionary standard for “fitness” isn’t strength or longevity, but reproduction, and on this scale the high-IQ don’t seem to do as well:

Smart teens don’t have sex (or kiss much either) (h/t Gene Expression):

Controlling for age, physical maturity, and mother’s education, a significant curvilinear relationship between intelligence and coital status was demonstrated; adolescents at the upper and lower ends of the intelligence distribution were less likely to have sex. Higher intelligence was also associated with postponement of the initiation of the full range of partnered sexual activities. … Higher intelligence operates as a protective factor against early sexual activity during adolescence, and lower intelligence, to a point, is a risk factor.

Source

Here we see the issue plainly: males at 120 and 130 IQ are less likely to get laid than clinically retarded men in the 60s and 70s. The right side of the graph is the “nerds”; the left side, the “jocks.” Of course, high-IQ females are even less likely to get laid than high-IQ males, but males tend to judge themselves against other men, not women, when it comes to dating success. Since low-IQ females are much less likely to get laid than low-IQ males, this implies that most of these “popular” guys are dating girls who are smarter than themselves–a fact not lost on the nerds, who would also like to date those girls.

In 2001, the MIT/Wellesley magazine Counterpoint (Wellesley is MIT’s “sister school,” and the two campuses allow cross-enrollment in each other’s courses) published a sex survey that provides a more detailed picture of nerd virginity:

I’m guessing that computer scientists invented polyamory, and neuroscientists are the chads of STEM. The results are otherwise pretty predictable.

Unfortunately, Counterpoint appears to be defunct due to lack of funding/interest and I can no longer find the original survey, but here is Jason Malloy’s summary from Gene Expression:

By the age of 19, 80% of US males and 75% of women have lost their virginity, and 87% of college students have had sex. But this number appears to be much lower at elite (i.e. more intelligent) colleges. According to the article, only 56% of Princeton undergraduates have had intercourse. At Harvard 59% of the undergraduates are non-virgins, and at MIT, only a slight majority, 51%, have had intercourse. Further, only 65% of MIT graduate students have had sex.

The student surveys at MIT and Wellesley also compared virginity by academic major. The chart for Wellesley displayed below shows that 0% of studio art majors were virgins, but 72% of biology majors were virgins, and 83% of biochem and math majors were virgins! Similarly, at MIT 20% of ‘humanities’ majors were virgins, but 73% of biology majors. (Apparently those most likely to read Darwin are also the least Darwinian!)

College Confidential has one paragraph from the study:

How Rolling Stone-ish are the few lucky souls who are doing the horizontal mambo? Well, not very. Considering all the non-virgins on campus, 41% of Wellesley and 32% of MIT students have only had one partner (figure 5). It seems that many Wellesley and MIT students are comfortingly monogamous. Only 9% of those who have gotten it on at MIT have been with more than 10 people and the number is 7% at Wellesley.

Someone needs to find the original study and PUT IT BACK ON THE INTERNET.

But this lack of early sexual success seems to translate into long-term marital happiness, once nerds find “the one.” Lex Fridman’s Divorce Rates by Profession offers a thorough list. The average divorce rate was 16.35%, with a high of 43% (Dancers) and a low of 0% (“Media and communication equipment workers.”)

I’m not sure exactly what all of these jobs are, nor exactly which ones should count as STEM (veterinarians? anthropologists?), nor do I know how many people are employed in each field, but I count 49 STEM professions with lower-than-average divorce rates (including computer scientists, economists, mathematical scientists, statisticians, engineers, biologists, chemists, aerospace engineers, astronomers and physicists, physicians, and nuclear engineers,) and only 23 with higher-than-average divorce rates (including electricians, water treatment plant operators, radio and telecommunication installers, broadcast engineers, and similar professions.) The purer sciences obviously had lower rates than the more practical applied-tech fields.

The big outliers were mathematicians (19.15%), psychologists (19.26%), and sociologists (23.53%), though I’m not sure they count (if so, there were only 22 professions with higher than average divorce rates.)

I’m not sure which professions count as “jock” or “chad,” but athletes had lower-than-average rates of divorce (14.05%), as did firefighters, soldiers, and farmers. Financial examiners, hunters, and dancers (presumably an athletic, female-dominated occupation), however, had very high rates of divorce.
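The above-/below-average tally is easy to reproduce. Here’s a minimal sketch using only the handful of rates quoted in this post–the full Lex Fridman list covers many more professions, so these counts are illustrative, not the 49-vs-23 totals:

```python
# Divorce rates (%) quoted in this post, vs. the overall average of 16.35%.
# This is a toy subset, not the full dataset.
AVERAGE = 16.35

rates = {
    "mathematicians": 19.15,
    "psychologists": 19.26,
    "sociologists": 23.53,
    "athletes": 14.05,
    "dancers": 43.0,
    "media and communication equipment workers": 0.0,
}

above = sorted(job for job, r in rates.items() if r > AVERAGE)
below = sorted(job for job, r in rates.items() if r < AVERAGE)

print("above average:", above)
print("below average:", below)
```

Running the same comparison over the full list is what produces the 49-vs-23 split described above.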

Medical Daily has an article on Who is Most Likely to Cheat? The Top 9 Jobs Unfaithful People Have (according to survey):

According to the survey recently taken by the “infidelity dating website,” Victoria Milan, individuals working in the finance field, such as brokers, bankers, and analysts, are more likely to cheat than those in any other profession. However, following those in finance comes those in the aviation field, healthcare, business, and sports.

With the exception of healthcare and maybe aviation, these are pretty typical Chad occupations, not STEM.

The Mirror has a similar list of jobs where people are most and least likely to be married. Most likely: Dentist, Chief Executive, Sales Engineer, Physician, Podiatrist, Optometrist, Farm product buyer, Precision grinder, Religious worker, Tool and die maker.

Least likely: Paper-hanger, Drilling machine operator, Knitter textile operator, Forge operator, Mail handler, Science technician, Practical nurse, Social welfare clerk, Winding machine operative, Postal clerk.

I struggled to find data on male fertility by profession/education/IQ, but there’s plenty on female fertility, eg the deceptively titled High-Fliers have more Babies:

…American women without any form of high-school diploma have a fertility rate of 2.24 children. Among women with a high-school diploma the fertility rate falls to 2.09 and for women with some form of college education it drops to 1.78.

However, among women with college degrees, the economists found the fertility rate rises to 1.88 and among women with advanced degrees to 1.96. In 1980 women who had studied for 16 years or more had a fertility rate of just 1.2.

As the economists prosaically explain: “The relationship between fertility and women’s education in the US has recently become U-shaped.”
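The U-shape the economists describe can be checked directly from the figures quoted above–the fertility minimum falls in the interior of the education range, not at either end:

```python
# Fertility rates quoted above, ordered by increasing education.
education = ["no diploma", "high school diploma", "some college",
             "college degree", "advanced degree"]
fertility = [2.24, 2.09, 1.78, 1.88, 1.96]

# A U-shaped relationship means the minimum sits in the middle of the range.
low_idx = fertility.index(min(fertility))
print(education[low_idx])                 # bottom of the U
print(0 < low_idx < len(fertility) - 1)   # interior minimum -> U-shaped
```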

Here is another article about the difference in fertility rates between high and low-IQ women.

But female fertility and male fertility may not follow the same pattern–I recall data elsewhere indicating that high-IQ men have more children than low-IQ men, which implies those men are having their children with lower-IQ women. (For example, while Bill and Hillary seem about matched on IQ and have only one child, Melania Trump does not seem as intelligent as Trump, who has five children.)

Amusingly, I did find data on fertility rate by father’s profession for 1920, in the Birth Statistics for the Birth Registration Area of the US:

Of the 1,508,874 children born in 1920 in the birth registration area of the United States, occupations of fathers are stated for … 96.9%… The average number of children ever born to the present wives of these occupied fathers is 3.3 and the average number of children living 2.9.

The average number of children ever born ranges from 4.6 for foremen, overseers, and inspectors engaged in the extraction of minerals to 1.8 for soldiers, sailors, and marines. Both of these extreme averages are easily explained, for soldiers, sailors, and marines are usually young, while such foremen, overseers, and inspectors are usually in middle life. For many occupations, however, the ages of the fathers are presumably about the same, and differences shown indicate real differences in the size of families. For example, the low figures for dentists (2), architects (2.1), and artists, sculptors, and teachers of art (2.2) are in striking contrast with the figures for mine operatives (4.3), quarry operatives (4.1), bootblacks, and brick and stone masons (each 3.9). …

As a rule the occupations credited with the highest number of children born are also credited with the highest number of children living, the highest number of children living appearing for foremen, overseers, and inspectors engaged in the extraction of minerals (3.9) and for steam and street railroad foremen and overseer (3.8), while if we exclude groups plainly affected by the age of fathers, the highest number of children living appear for mine and quarry operatives (each 3.6).
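Incidentally, the gap between “children ever born” (3.3) and “children living” (2.9) in those 1920 figures implies a child survival rate of roughly 88%–a reminder of how much child mortality shaped these statistics:

```python
# Averages from the 1920 Birth Statistics quoted above.
born, living = 3.3, 2.9
survival = living / born
print(round(survival, 2))  # ~0.88, i.e. roughly one child in eight had died
```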

Obviously the job market was very different in 1920–no one was majoring in computer science. Perhaps some of those folks who became mine and quarry operatives back then would become engineers today–or perhaps not. Here are the average numbers of surviving children for the most obviously STEM professions (remember average for 1920 was 2.9):

Electricians 2.1, electrotypers 2.2, telegraph operators 2.2, actors 1.9, chemists 1.8, inventors 1.8, photographers and physicians 2.1, technical engineers 1.9, veterinarians 2.2.

I don’t know what paper hangers do, but the Mirror said they were among the least likely to be married, and in 1920, they had an average of 3.1 children–above average.

What about athletes? How smart are they?

“Athletes Show Huge Gaps on SAT Scores” is not a promising title for the “nerds are athletic” crew.

The Journal-Constitution studied 54 public universities, “including the members of the six major Bowl Championship Series conferences and other schools whose teams finished the 2007-08 season ranked among the football or men’s basketball top 25.”…

  • Football players average 220 points lower on the SAT than their classmates. Men’s basketball was 227 points lower.
  • University of Florida won the prize for biggest gap between football players and the student body, with players scoring 346 points lower than their peers.
  • Georgia Tech had the nation’s best average SAT score for football players, 1028 of a possible 1600, and best average high school GPA, 3.39 of a possible 4.0. But because its student body is apparently very smart, Tech’s football players still scored 315 SAT points lower than their classmates.
  • UCLA, which has won more NCAA championships in all sports than any other school, had the biggest gap between the average SAT scores of athletes in all sports and its overall student body, at 247 points.
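Since these gaps are simple differences on the old 1600-point scale, each one implies a student-body average. A back-of-the-envelope calculation with the Georgia Tech figures quoted above:

```python
# Georgia Tech figures from the article: football players averaged 1028
# on the 1600-point SAT, 315 points below their classmates.
team_avg = 1028
gap = 315
student_body_avg = team_avg + gap
print(student_body_avg)  # 1343
```

So even the nation’s best-scoring football team sat far below its own student body.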

From the original article, which no longer seems to be up on the Journal-Constitution website:

All 53 schools for which football SAT scores were available had at least an 88-point gap between team members’ average score and the average for the student body. …

Football players performed 115 points worse on the SAT than male athletes in other sports.

The differences between athletes’ and non-athletes’ SAT scores were less than half as big for women (73 points) as for men (170).

Many schools routinely used a special admissions process to admit athletes who did not meet the normal entrance requirements. … At Georgia, for instance, 73.5 percent of athletes were special admits compared with 6.6 percent of the student body as a whole.

On the other hand, as Discover Magazine discusses in “The Brain: Why Athletes are Geniuses,” athletic tasks–like catching a fly ball or slapping a hockey puck–require exceptionally fast and accurate brain signals to trigger the correct muscle movements.

Ryan Stegal studied the GPAs of high school student athletes vs. non-athletes and found that the athletes had higher average GPAs than the non-athletes, but he also notes that the athletes were required to meet certain minimum GPA requirements in order to play.

But within athletics, it looks like the smarter athletes perform better than dumber ones, which is why the NFL uses the Wonderlic Intelligence Test:

NFL draft picks have taken the Wonderlic test for years because team owners need to know if their million dollar player has the cognitive skills to be a star on the field.

What does the NFL know about hiring that most companies don’t? They know that regardless of the position, proof of intelligence plays a profound role in the success of every individual on the team. It’s not enough to have physical ability. The coaches understand that players have to be smart and think quickly to succeed on the field, and the closer they are to the ball the smarter they need to be. That’s why, every potential draft pick takes the Wonderlic Personnel Test at the combine to prove he does–or doesn’t—have the brains to win the game. …

The first use of the WPT in the NFL was by Tom Landry of the Dallas Cowboys in the early 70s, who took a scientific approach to finding players. He believed players who could use their minds where it counted had a strategic advantage over the other teams. He was right, and the test has been used at the combine ever since.

For the NFL, years of testing shows that the higher a player scores on the Wonderlic, the more likely he is to be in the starting lineup—for any position. “There is no other reasonable explanation for the difference in test scores between starting players and those that sit on the bench,” Callans says. “Intelligence plays a role in how well they play the game.”

Let’s look at Exercising Intelligence: How Research Shows a Link Between Physical Activity and Smarts:

A large study conducted at the Sahlgrenska Academy and Sahlgrenska University Hospital in Gothenburg, Sweden, reveals that young adults who regularly exercise have higher IQ scores and are more likely to go on to university.

The study was published in the Proceedings of the National Academy of Sciences (PNAS), and involved more than 1.2 million Swedish men. The men were performing military service and were born between the years 1950 and 1976. Both their physical and IQ test scores were reviewed by the research team. …

The researchers also looked at data for twins and determined that primarily environmental factors are responsible for the association between IQ and fitness, and not genetic makeup. “We have also shown that those youngsters who improve their physical fitness between the ages of 15 and 18 increase their cognitive performance.”…

I have seen similar studies before, some involving mice and some, IIRC, the elderly. It appears that exercise is probably good for you.

I have a few more studies I’d like to mention quickly before moving on to discussion.

Here’s Grip Strength and Physical Demand of Previous Occupation in a Well-Functioning Cohort of Chinese Older Adults (h/t prius_1995), which found that participants who had previously worked in construction had greater grip strength than former office workers.

Age and Gender-Specific Normative Data of Grip and Pinch Strength in a Healthy Adult Swiss Population (h/t prius_1995).

 

If the nerds are in the sedentary cohort, then they may be just as athletic, if not more athletic, than all of the other cohorts except the heavy-work one.

However, in Revised normative values for grip strength with the Jamar dynamometer, the authors found no effect of profession on grip strength.

And Isometric muscle strength and anthropometric characteristics of a Chinese sample (h/t prius_1995).

And Pumpkin Person has an interesting post about brain size vs. body size.

 

Discussion: Are nerds real?

Overall, it looks like smarter people are more athletic, more athletic people are smarter, smarter athletes are better athletes, and exercise may make you smarter. For most people, the nerd/jock dichotomy is wrong.

However, there is very little overlap at the very highest end of the athletic and intelligence curves–most college (and thus professional) athletes are less intelligent than the average college student, and most college students are less athletic than the average college (and professional) athlete.

Additionally, while people with STEM degrees make excellent spouses (except for mathematicians, apparently,) their reproductive success is below average: they have sex later than their peers and, as far as the data I’ve been able to find shows, have fewer children.

Stephen Hawking

Even if there is a large overlap between smart people and athletes, they are still separate categories selecting for different things: a cripple can still be a genius, but can’t play football; a dumb person can play sports, but not do well at math. Stephen Hawking can barely move, but he’s still one of the smartest people in the world. So the set of all smart people will always include more “stereotypical nerds” than the set of all athletes, and the set of all athletes will always include more “stereotypical jocks” than the set of all smart people.

In my experience, nerds aren’t socially awkward (aside from their shyness around women.) The myth that they are stems from the fact that they have different interests and communicate in a different way than non-nerds. Let nerds talk to other nerds, and they are perfectly normal, communicative, socially functional people. Put them in a room full of non-nerds, and suddenly the nerds are “awkward.”

Unfortunately, the vast majority of people are not nerds, so many nerds have to spend the majority of their time in the company of people who are very different from themselves. By contrast, very few people of normal IQ and interests ever have to spend time surrounded by the very small population of nerds. If you did put a normie in a room full of nerds, however, you’d find that suddenly he was the one who didn’t fit in. The perception that nerds are socially awkward is therefore just normie bias.

Why did the nerd/jock dichotomy become so popular in the 70s? Probably in part because science and technology were really taking off as fields normal people could aspire to major in: man had just landed on the moon, and the Intel 4004 was released in 1971. Very few people went to college or were employed in the sciences back in 1920; by 1970, colleges were everywhere and science was booming.

And at the same time, colleges and highschools were ramping up their athletics programs. I’d wager that the average school in the 1800s had neither PE nor athletics of any sort. To find those, you’d probably have to attend private academies like Andover or Exeter. By the 70s, though, schools were taking their athletics programs–even athletic recruitment–seriously.

How strong you felt the dichotomy probably depends on the nature of your school. I have attended schools where all of the students were fairly smart and there was no anti-nerd sentiment, and I have attended schools where my classmates were fiercely anti-nerd and made sure I knew it.

But the dichotomy predates the terminology. Take Superman, who first appeared in 1938. His disguise is a pair of glasses, because no one can believe that the bookish, mild-mannered Clark Kent is actually the super-strong Superman. Batman is based on the character of El Zorro, created in 1919; Zorro is an effete, weak, foppish nobleman by day and a dashing, sword-fighting hero of the poor by night. Of course these characters are both smart and athletic, but their disguises only work because others do not expect them to be. As fantasies, the characters are powerful because they provide a vehicle for our own desires: for our everyday, normal failings to be just a cover for how secretly amazing we are.

But for the most part, most smart people are perfectly fit, healthy, and coordinated–even the ones who like math.

 

Notes on the Muslim Brotherhood

(I’m pretty much starting from scratch)

Sayyid Qutb lived from 1906 – 1966. He was an Egyptian writer, thinker, and leader of the Muslim Brotherhood. He was executed in 1966 for plotting to assassinate the Egyptian president, Nasser.

The Muslim Brotherhood was founded back in 1928 by Islamic scholar Hassan al-Banna. Its goal is to instill the Quran and the Sunnah as the “sole reference point for … ordering the life of the Muslim family, individual, community … and state”;[13] mottos include “Believers are but Brothers”, “Islam is the Solution”, and “Allah is our objective; the Qur’an is the Constitution; the Prophet is our leader; jihad is our way; death for the sake of Allah is our wish”.[14][15]

As of 2015, the MB was considered a terrorist organization by Bahrain,[7][8] Egypt, Russia, Syria, Saudi Arabia, and the United Arab Emirates.[9][10][11][12]

The MB’s philosophy is pan-Islamist and it wields power in several countries:

323/354 seats in the Sudanese National Assembly,
74/132 seats in the Palestinian Legislature,
69/217 seats in the Tunisian assembly,
39/249 seats in the Afghan House,
46/301 seats in Yemen,
16/146 seats in Mauritania,
40/560 seats in Indonesia,
2/40 seats in Bahrain,
and 4/325 and 1/128 in Iraq and Lebanon, respectively
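Raw seat counts are hard to compare across chambers of very different sizes; converting the figures above to shares shows where the Brotherhood’s parliamentary presence is strongest (Iraq and Lebanon omitted for brevity):

```python
# Seat counts quoted above, as (MB-affiliated seats, total seats).
seats = {
    "Sudan": (323, 354),
    "Palestine": (74, 132),
    "Tunisia": (69, 217),
    "Afghanistan": (39, 249),
    "Yemen": (46, 301),
    "Mauritania": (16, 146),
    "Indonesia": (40, 560),
    "Bahrain": (2, 40),
}

shares = {country: won / total for country, (won, total) in seats.items()}
strongest = max(shares, key=shares.get)
print(strongest, round(shares[strongest], 2))  # Sudan 0.91
```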

In 2012, the MB’s political party won Egypt’s elections (following the January Revolution of 2011,) but the organization has had some trouble in Egypt since then.

The MB also does charity work, runs hospitals, etc., and is clearly using democratic means to assemble power.

According to Wikipedia:

As Islamic Modernist beliefs were co-opted by secularist rulers and official `ulama, the Brotherhood has become traditionalist and conservative, “being the only available outlet for those whose religious and cultural sensibilities had been outraged by the impact of Westernisation”.[37] Al-Banna believed the Quran and Sunnah constitute a perfect way of life and social and political organization that God has set out for man. Islamic governments must be based on this system and eventually unified in a Caliphate. The Muslim Brotherhood’s goal, as stated by its founder al-Banna was to drive out British colonial and other Western influences, reclaim Islam’s manifest destiny—an empire, stretching from Spain to Indonesia.[38] The Brotherhood preaches that Islam will bring social justice, the eradication of poverty, corruption and sinful behavior, and political freedom (to the extent allowed by the laws of Islam).

Back to Qutb:

In the early 1940s, he encountered the work of Nobel Prize-winning French eugenicist Alexis Carrel, who would have a seminal and lasting influence on his criticism of Western civilization, as “instead of liberating man, as the post-Enlightenment narrative claimed, he believed that Western modernity enmeshed people in spiritually numbing networks of control and discipline, and that rather than build caring communities, it cultivated attitudes of selfish individualism. Qutb regarded Carrel as a rare sort of Western thinker, one who understood that his civilization “depreciated humanity” by honouring the “machine” over the “spirit and soul” (al-nafs wa al-ruh). He saw Carrel’s critique, coming as it did from within the enemy camp, as providing his discourse with an added measure of legitimacy.”[24]

From 1948 to 1950, he went to the United States on a scholarship to study its educational system, spending several months at Colorado State College of Education (now the University of Northern Colorado) in Greeley, Colorado. …

Over two years, he worked and studied at Wilson Teachers’ College in Washington, D.C. (one of the precursors to today’s University of the District of Columbia), Colorado State College for Education in Greeley, and Stanford University.[30] He visited the major cities of the United States and spent time in Europe on his journey home. …

On his return to Egypt, Qutb published “The America that I Have Seen”, where he became explicitly critical of things he had observed in the United States, eventually encapsulating the West more generally: its materialism, individual freedoms, economic system, racism, brutal boxing matches, “poor” haircuts,[5] superficiality in conversations and friendships,[32] restrictions on divorce, enthusiasm for sports, lack of artistic feeling,[32] “animal-like” mixing of the sexes (which “went on even in churches”),[33] and strong support for the new Israeli state.[34] Hisham Sabrin, noted that:

“As a brown person in Greeley, Colorado in the late 1940’s studying English he came across much prejudice. He was appalled by what he perceived as loose sexual openness of American men and women (a far cry from his home of Musha, Asyut). This American experience was for him a fine-tuning of his Islamic identity.”…

Qutb concluded that major aspects of American life were primitive and “shocking”, a people who were “numb to faith in religion, faith in art, and faith in spiritual values altogether”. His experience in the U.S. is believed to have formed in part the impetus for his rejection of Western values and his move towards Islamism upon returning to Egypt.

The man has a point. American art has a lot of Jackson Pollock and Andy Warhol schtick.

In 1952, the Egyptian monarchy–which was pro-Western–was overthrown by nationalists (?) like Nasser. At first Nasser and Qutb worked together, but there was something of a power struggle, and Qutb didn’t approve of Nasser organizing the new Egypt along essentially secular lines instead of Islamic ideology. Qutb then tried to have Nasser assassinated, and Nasser had Qutb arrested, tortured, and eventually hanged.

Aside from the fact that Qutb is Egyptian and Muslim, he and the alt-right have a fair amount in common. (Read his Wikipedia page if you don’t see what I mean.) The basic critique is the same: the West is immoral and degenerate, with bad art and bad manners; capitalism has created a “spiritually numbing” network of control (your boss, office dress codes, the HOA, paperwork); and a return to spirituality (not rejecting science, but enhancing it,) can fix these things.

Unfortunately, the ideology has some bad side effects. Qutb’s brother, Muhammad Qutb, moved to Saudi Arabia after his release from Egyptian prison and became a professor of Islamic Studies,[96][97] where he promoted Sayyid Qutb’s work. One of Muhammad Qutb’s students/followers was Ayman Zawahiri, who became a member of the Egyptian Islamic Jihad[98] and mentor of Osama bin Laden.

Soraya, Empress of Iran (1953), showing no interest in Islamic veiling rules

My impression–Muslim monarchs tend to be secular modernists. They see the tech other countries have (especially bombs) and want it. They see the GDPs other countries have, and want that, too. They’re not that interested in religion (which would limit their behavior) and not that interested in nationalism (as they tend to rule over a variety of different “nations.”) Many monarchs are (or were) quite friendly to the West. The King of Jordan and Shah of Iran come immediately to mind.

(I once met the Director of the CIA. He had a photograph of the King of Jordan in his office. Motioning to the photo, he told me the King was one of America’s friends.)

But modernization isn’t easy. People who have hundreds or thousands of years’ experience living a particular lifestyle are suddenly told to go live a different lifestyle, and aren’t sure how to react. The traditional lifestyle gave people meaning, but the modern lifestyle gives people TV and low infant mortality.

That’s the situation we’re all facing, really.

So what’s a society to do? Sometimes they keep their kings. Sometimes they overthrow them. Then what? You can go nationalist–like Nasser. Communist–like South Yemen. (Though I’m not sure Yemen had a king.) Or Islamic, like Iran. (As far as I can tell, the Iranian revolution had a significant communist element, but the Islamic element won out.) The Iranian revolution is in no danger of spreading, though, because the Iranians practice a variety of Islam that’s a rare minority everywhere else in the world.

I hear the Saudis and certain other monarchs have stayed in power so far by using their oil money to keep everyone comfortable (staving off the stresses of modernization) and enforcing Islamic law (keeping the social system familiar.) We’ll see how long this lasts.

So one of the oddities of the Middle East is that while other parts of the world have become more liberal, it appears to have become less so. You can find many before-and-after pictures of places like Iran, where women used to mingle with men, unveiled, in Western-style dress. (In fact, I think the veil was illegal in Iran in the late 1930s.) War-torn Afghanistan is an even sadder case.

Mohammad Zahir Shah was king of Afghanistan from 1933 through 1973. According to Wikipedia:

“After the end of the Second World War, Zahir Shah recognised the need for the modernisation of Afghanistan and recruited a number of foreign advisers to assist with the process.[12] During this period Afghanistan’s first modern university was founded.[12]… despite the factionalism and political infighting a new constitution was introduced during 1964 which made Afghanistan a modern democratic state by introducing free elections, a parliament, civil rights, women’s rights and universal suffrage.[12]

Mohammad Zahir Shah and his wife, Queen Humaira Begum, visiting JFK at the White House, 1963
credit “Robert Knudsen. White House Photographs. John F. Kennedy Presidential Library and Museum, Boston”

While he was in Italy (undergoing eye surgery and treatment for lumbago,) his cousin executed a coup and instituted a republican government. As we all know, Afghanistan has gone nowhere but up since then.

Zahir Shah returned to Afghanistan in 2002, after the US drove out the Taliban, where he received the title “Father of the Nation” but did not resume duties as monarch. He died in 2007.

His eldest daughter (Princess of Afghanistan?) is Bilqis Begum–Bilqis is the Queen of Sheba’s Islamic name–but she doesn’t have a Wikipedia page. The heir apparent is Ahmad Shah Khan, if you’re looking for someone to crown.

Back to the Muslim Brotherhood.

One of the big differences between elites and commoners is that commoners tend to be far more conservative than elites. Elites think a world in which they can jet off to Italy for medical treatment sounds awesome, while commoners think this is going to put the local village medic out of a job. Or, as the world learned last November, America’s upper and lower classes have very different ideas about borders, globalization, and who should be president.

Similarly, the Muslim Brotherhood seems perfectly happy to use democratic means to come to power where it can.

(The MB apparently does a lot of charity work, which is part of why it is popular.)

The relationship between the MB and Saudi Arabia is interesting. After Egypt cracked down on the MB, thousands of members went to Saudi Arabia. SA needed teachers, and many of the MB were teachers, so it seemed mutually beneficial. The MB thus took over the Saudi educational system, and probably large chunks of their bureaucracy.

Relations soured between SA and the MB due to SA’s decision to let the US base troops there for its war against Iraq, and due to the MB’s involvement in the Arab Spring and active role in Egypt’s democracy–Saudi monarchs aren’t too keen on democracy. In 2014, SA declared the MB a “terrorist organization.”

Lots of people say the MB is a terrorist org, but I’m not sure how that distinguishes them from a whole bunch of other groups in the Middle East. I can’t tell what links the MB has (if any) to ISIS. (While both groups have similar-sounding goals, it’s entirely possible for two different groups to both want to establish an Islamic Caliphate.)

The MB reminds me of the Protestant Reformation, with its emphasis on returning to the Bible as the sole source of religious wisdom, the establishment of Puritan theocracies, and a couple hundred years of Catholic/Protestant warfare. I blame the Protestant Reformation on the spread of the printing press in Europe, without which the whole idea of reading the Bible for yourself would have been nonsense. I wager something similar happened recently in the Middle East, with cheap copies of the Quran and other religious (and political) texts becoming widely available.

I’ll have to read up on the spread of (cheap) printing in the Islamic world, but a quick search turns up Ami Ayalon’s The Arabic Print Revolution: Cultural Production and Mass Readership:

so that looks like a yes.

What is Cultural Appropriation?

White person offended at the Japanese on behalf of Mexicans, who actually think Mario in a sombrero is awesome

“Cultural appropriation” means “This is mine! I hate you! Don’t touch my stuff!”

Cultural appropriation is one of those newspeak buzz-phrases that sound vaguely like real things, but upon any kind of inspection, completely fall apart. Wikipedia defines Cultural Appropriation as “the adoption or use of the elements of one culture by members of another culture.[1]”, but this is obviously incorrect. By this definition, Louis Armstrong committed cultural appropriation when he learned to play the white man’s trumpet. So does an immigrant who moves to the US and learns English.

Obviously this is not what anyone means by cultural appropriation–this is just cultural diffusion, a completely natural, useful, and nearly unstoppable part of life.

A more nuanced definition is that cultural appropriation is “when someone from a more powerful group starts using an element of a less powerful group’s culture.” The idea is that this is somehow harmful to the people of the weaker culture, or at least highly distasteful.

To make an analogy: Let’s suppose you were a total nerd in school. The jocks called you names, locked you in your locker, and stole your lunch money. You were also a huge Heavy Metal fan, for which you were also mocked. The jocks even tried to get the Student Council to pass laws against playing heavy metal at the school dance.

And then one day, the biggest jock in the school shows up wearing a “Me-Tallica” shirt, and suddenly “Me-Tallica” becomes the big new thing among all of the popular kids. Demand skyrockets for tickets to heavy metal concerts, and now you can’t afford to go see your favorite band.

You are about to go apoplectic: “Mine!” you want to yell. “That’s my thing! And it’s pronounced Meh-tallica, you idiots!”

SJWs protest Japanese women sharing Japanese culture with non-Japanese. The sign reads “It wouldn’t be so bad w/out white institutions condoning erasure of the Japanese narrative + orientalism which in turn supports dewomaning + fetishizing AAPI + it is killing us”

How many cases of claimed cultural appropriation does this scenario actually fit? It requires meeting three criteria to count: a group must be widely discriminated against, its culture must be oppressed or denigrated, and then that same culture must be adopted by the oppressors. This is the minimal definition; a more difficult-to-prove definition requires some actual harm to the oppressed group.

Thing is, there is not a whole lot of official oppression going on in America these days. Segregation ended around the 60s. I’m not sure when the program of forced Native American assimilation via boarding schools ended, but it looks like conditions improved around 1930 and by 1970 the government was actively improving the schools. Japanese and German internment ended with World War II.

It is rather hard to prove oppression–much less cultural oppression–after the 70s. No one is trying to wipe out Native American languages or religious beliefs; there are no laws against rap music or dreadlocks. It’s even harder to prove oppression for recent arrivals whose ancestors didn’t live here during segregation, like most of our Asians and Hispanics (America was about 88% non-Hispanic white and 10% black prior to the 1965 Immigration Act.)

So instead, in cases like the anti-Kimono Wednesdays protest photo above, the claim is inverted:

It wouldn’t be so bad w/out white institutions condoning erasure of the Japanese narrative + orientalism which in turn supports dewomaning + fetishizing AAPI + it is killing us

SJWs objected to Japanese women sharing kimonos with non-Japanese women not because of a history of harm to Japanese people or culture, but because sharing of the kimonos itself is supposedly inspiring harm.

“Orientalism” is one of those words that you probably haven’t encountered unless you’ve had to read Edward Said’s book on the subject (I had to read it twice.) It’s a pretty meaningless concept to Americans, because unlike Monet, we never really went through an Oriental-fascination phase. For good or ill, we just aren’t very interested in learning about non-Americans.

The claim that orientalism is somehow killing Asian American women is strange–are there really serial killers who target Asian ladies specifically because they have a thing for Madame Butterfly?–but at least suggests a verifiable fact: are Asian women disproportionately murdered?

Of course, if you know anything about crime stats, you know that homicide victims tend to be male and most crime is intraracial, not interracial. For example, according to the FBI, of the 12,664 people murdered in 2011, 9,829 were men–about 78%. The FBI’s racial data is only broken down into White (5,825 victims,) Black (6,329,) Other (335), and Unknown (175)–there just aren’t enough Asian homicide victims to count them separately. For women specifically, the number of Other Race victims is only 110–or just a smidge under 1% of total homicides.

And even these numbers are over-estimating the plight of Asian Americans, as Other also includes non-Asians like Native Americans (whose homicide rates are probably much more concerning.)
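Those percentages are easy to double-check. Here is a quick back-of-the-envelope script in Python, using only the 2011 FBI victim counts already cited above (the script just redoes the division):

```python
# FBI 2011 homicide victim counts, as cited in the text
total_victims = 12_664
male_victims = 9_829
other_race_female_victims = 110  # female victims in the "Other" race category

# Share of all homicide victims who were male -- the "about 78%" figure
male_share = male_victims / total_victims
print(f"male share of victims: {male_share:.1%}")

# "Other" race female victims as a share of all homicides --
# the "just a smidge under 1%" figure
other_female_share = other_race_female_victims / total_victims
print(f"Other-race female share: {other_female_share:.2%}")
```

Running it gives roughly 77.6% and 0.87%, matching the rounded figures in the text.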

Call me crazy, but I don’t think kimono-inspired homicides are a real concern.

Kylie Jenner Accused of Cultural Appropriation for Camo Bikini Ad

In practice, SJWs define cultural appropriation as “any time white people use an element from a non-white group’s culture”–or in the recent Kylie Jenner bikini case, “culture” can be expanded to “anything that a person from that other culture ever did, even if millions of other people from other cultures have also done that same thing.” (My best friend in high school wore camo to prom. My dad wore camo to Vietnam.) And fashion trends come and go–even if Destiny’s Child created a camo bikini trend 16 years ago, the trend did not last. Someone else can come along and start a new camo bikini trend.

(Note how TeenVogue does not come to Kylie’s defense by pointing out that these accusations are fundamentally untrue. Anyone can make random, untrue accusations about famous people–schizophrenics do it all the time–but such accusations are not normally considered newsworthy.)

“Cultural appropriation” is such a poorly defined mish-mash of ideas precisely because it isn’t an idea. It’s just an emotion: This is mine, not yours. I hate you and you can’t have it. When white people use the phrase, it takes on a secondary meaning: I am a better white person than you.

 

Guest Post: A Quick History of the Russia Conspiracy Hysteria

EvX: Today we have an Anonymous Guest Post on the History of the Russia Conspiracy Hysteria. (Your normally scheduled anthropology will resume next Friday):

2011: Liberals get excited about Arab Spring. They love the idea of overthrowing dictators and replacing governments across the Middle East with democracies. They largely don’t realize that these democracies will be fundamentalist Islamic states.

Official US government policy supports and assists rebels in Syria against Assad. Leaked emails show how the US supported al Qaeda forces. See Step by Step: How Hillary and Obama Incubated ISIS.

Note that ISIS is also fighting against Assad, putting the US effectively on the ISIS side here. US support flowed to Syrian rebel forces, which may have included ISIS. ISIS is on the side of democracy and multiculturalism, after all.

Russia, meanwhile, is becoming more of a problem for the US Middle East agenda because of its support for Assad. In 2013, this comes to a head with the alleged Assad chemical weapons attack. Everyone gets very upset about chemical weapons and mad at the Russians for supporting Assad. Many calls for regime change in Syria were made. ISIS is also gaining power, and Russia is intervening directly against them. We can’t have Russia bombing ISIS, can we?

As a result, around 2013 Russia started to gain much more prominence as “our” enemy. This is about when I started to see the “Wikileaks is a Russian operation” and “ZeroHedge is Russian propaganda” memes, although there are archives of this theory from as early as 2011–Streetwise Professor: Peas in a PoD: Occupy, RT, and Zero Hedge.

There is, of course, negligible evidence for either of these theories, but that didn’t stop them from spreading. Many hackers have come from Russia over the years, and Russia was surely happy about many of Wikileaks’ releases, but that does not mean that they’re receiving money or orders from Russia.

In 2014, Russia held the Olympics, and around that time there was a lot of publicity about how Russia does not allow gay marriage. Surely only an evil country could prohibit it. Needless to say, I saw little said about Saudi Arabia’s position on gay marriage.

Russia annexed Crimea in 2014, and sanctions were introduced against Russia. Most likely the annexation was opposed because this would mean that Crimean gays would not be able to get married any time soon.

[EvX: I think Anon is being sarcastic here and does actually understand geostrategy.]

The combination of Russian interference in opposition to ISIS plus the annexation of Crimea was just too much for liberals and cuckservatives still opposed to “Soviet” influence, and various aggressive statements toward Russia began to come from Hillary and members of Congress.

Trump enters the presidential race in 2015, and he wonders why we’re opposing Russian actions against ISIS. Why are we taking aggressive stands that could lead to war with Russia? What’s in it for Americans?

Obviously this could only mean that Trump was a Russian agent. And who would a Russian agent work with but Russian hackers and the Russian Wikileaks agency?

Wikileaks released the DNC emails in July 2016, and they released the Podesta emails shortly before the election. Since Americans were known to not have any access to any of the leaked information, it could only have come from Russian government hackers.

Liberals have assumed that any contacts between the Trump team and Russian diplomats prior to the election were related to illegal coordination to influence or “hack” the election. Never mind that communication between presidential campaigns and foreign diplomats is not uncommon–CNN Politics: Obama Takes Campaign Trail Overseas.

Following the election, Trump associate Flynn might have said to the Russians that the sanctions could possibly be reexamined at some point, thus obviously severely interfering with US diplomatic relations. Of course this statement has been worthy of an extensive FBI investigation.

Most recently we have the “leak” of classified information from Trump to Russia, in which Trump told the Russians to be on the lookout for ISIS bombs smuggled onto planes in laptops. Apparently this is very bad because it’s important for ISIS to successfully bomb Russian civilian planes if they feel like it.

 

Let’s sum up this logic:
Russia is bad because they oppose US efforts to install Islamic fundamentalist governments in the Middle East, because they oppose gay marriage, and because taking Crimea is basically the same as Hitler’s invasion of Poland.

Russia is full of hackers. Assange is a Russian agent since he publishes information leaked from the US. Trump is a Russian agent since he opposes war with Russia.

Russians hacked the DNC and Podesta at Trump’s request and gave the information to Wikileaks. Flynn interfered with US diplomacy. Trump is giving US secrets to Russia.

 

Note the strength of this narrative despite its very flimsy evidence. Investigations into Trump’s “Russian connections” can continue endlessly so long as people believe in them.

Make Athens Great Again

h/t Steve Sailer: Donna Zuckerberg’s Woke Classics Mag Denounces Pericles’ Anti-Immigrant Citizenship Law of 451 BC:

…we need to stop pretending that the worst thing the Athenians ever did was to execute Socrates and openly engage the true dark side of Classical Athens’ anti-immigration policies and the obsession with ethnic purity that lies at the heart of its literature, history, and philosophy….

Known as the Periclean Citizenship Law, the law passed around 451 BCE restricted access to political power and other legal rights to only those born of both a citizen mother and father.

 

You asked. I deliver.

(And yes, I did know about the Periclean Citizenship Law before she brought it up.)

Moderatism? pt 2 Also Lightning

The best arguments (I’ve come up with) in favor of moderation are A. humans are imperfect, so let’s be careful, and B. Let’s avoid holiness spirals. The best argument against it is that sometimes moderatism doesn’t work, either.

But we haven’t defined what moderatism is.

People are generally moderates for four reasons:

  1. They are not very bright, and so cannot understand political or economic arguments well enough to decide whether, say, global warming is real or the budget needs to be balanced, so they don’t.
  2. They are bright enough to evaluate arguments, but they aren’t interested. Economics bores them. So they don’t bother.
  3. They can evaluate arguments and they care, but their opinions don’t slot neatly into “left” or “right”–for example, they may believe simultaneously in fiscal conservatism and gay marriage.
  4. They just like the status quo.

The last group bugs the crap out of me.

There are lots of people who say they want something–say, an end to global warming, or more pie–but won’t actually do anything in support of their goals, like buy a more fuel efficient car or fruit filling. There are also a lot of people who say that they want something–libertarianism, say–but then claim not to want to end up at the logical end of the libertarian road. (Pot smokers who don’t want free association, I’m looking at you.) Plenty of people who supported the Russian Revolution merely wanted to end that awful war with Germany and redistribute some of the land and wealth, not starve millions of Ukrainians to death and turn the whole country into a communist nightmare, but that’s what the revolution got them.

Claiming you want a moderate outcome while supporting an approach that leads somewhere very different is the height of either dishonesty or idiocy.

But back to our question, I think we can define a “moderate” as:

  1. Someone who takes a position between two extremes, (consciously or unconsciously,) often trying to promote consensus;
  2. Someone who wants to preserve the status-quo;
  3. Someone who wants to move in a particular direction, but doesn’t embrace their philosophy’s extreme end.

It would probably amuse most readers of this blog to know that I think of myself as a “moderate.” After all, I hold a lot of ideas that are well outside the American mainstream. But my goals–long-term stability, health, and economic well-being for myself, my friends, family, and the country at large–are pretty normal. I think most people want these things.

But I don’t think continuing the status quo is getting us stability, health, prosperity, etc. The status quo could certainly be worse–I could be on fire right now. But the general trends are not good and have not been good for a long time, and I see neither the traditional “liberal” nor “conservative” solutions as providing a better direction–which is why I am willing to consider some radically new (or old) ideas. (Besides, “moderate” is much easier to explain to strangers than, “I think democracy is deeply flawed.”)

Let’s call this “meta-moderatism”–perhaps we should distinguish here between moderatism of means and moderatism of goals.

Just as holiness spirals only work if you’re actually spiraling into holiness, so consensus only works if you capture actual wisdom.

I think Scott Alexander (of Slate Star Codex) is the most famous principled moderate I know of, though perhaps principled neutralist is a better description–he tries to be meta-consistent in his principles and give his opponents the benefit of the doubt in order to actually understand why they believe what they do–because “moderate” seems vaguely inaccurate to describe any polyamorist.

It occurs to me that democracy seems inclined toward moderatism of means, simply because any candidate has to get a majority (or plurality) of people to vote for them.

 

… You know what? I’m bored. I’m going to research rare forms of lightning.

St. Elmo’s fire (see also this awesome picture of red sprites.)

(This case actually caused by snow and wind, not a thunderstorm!)

ball lightning

 

red sprites and elf lightning–good explanation of the phenomenon

Should it be “elf lightning” or “lightning elves”?

red sprite and elf lightning, photo taken from space

lightning sprites

(same source as the previous picture.)

Can one be a principled moderate?

And unto the angel of the church of the Laodiceans write; These things saith the Amen, the faithful and true witness, the beginning of the creation of God; I know thy works, that thou art neither cold nor hot: I would thou wert cold or hot! So then because thou art lukewarm, and neither cold nor hot, I will spue thee out of my mouth. — Revelation 3:14-16

“No one likes a Jesus freak.” — Anon, the internet

From a memetic point of view, most ideologies would like their adherents to be strong believers. What good to memetic Christianity, after all, is someone who does not bother to spread Christianity? As a matter of principle, there is something hypocritical–intellectually inconsistent or dishonest–about people who profess to believe an ideology, but lay down some boundary beyond which they do not bother to follow it.

And yet, at the same time, we often feel a very practical aversion to ideological extremists. People who believe in social safety nets because they don’t want poor people to starve in the streets may also genuinely believe that communism was a disaster.

Ideologies are rather like maps, and I have yet to encounter a map that accurately reflected every aspect of the Earth’s surface at once (Mercator maps of Greenland, I am looking at you.) The world is a complicated place, and all ideological models seek to illuminate human behavior by reducing it to understandable patterns.

Like any map, this is both a strength and a weakness. We do not throw out a map because it is imperfect; even a Mercator map is still a valuable tool. We also do not deny the existence of a sandbar we have just struck simply because it is not on our charts. Even religions, which profess perfection due to divine revelation, must still be actually put into practice by obviously imperfect human believers.

In extreme versions of ideologies, the goal often ceases to be some practical, real world outcome, and becomes instead proving one’s own ideological purity. SJWs are the most common embodiment of this tendency, arguing endlessly over matters like, “Does GoldieBlox’s advertising/packaging de-value girls’ princess play?” or “Asking immigrants not to rape is racist colonization of POC bodies.” There are many organizations out there trying to decrease the number of black people who are murdered every year, but you have probably never heard of any of the successful ones. By contrast, the one group liberals actually support and pay attention to, “Black Lives Matter,” has, by driving police out of black communities, actually increased the number of black people who’ve been murdered.

Within the holiness spiral, actually denying reality becomes the easiest way to prove to be even holier than the next guy. The doctrine of transubstantiation claims that a piece of bread has been transformed into the body of Christ even though no physical, observable change has occurred. Almost everyone agrees that the police shouldn’t choke people to death during routine arrests; it takes true devotion to believe that the police shouldn’t shoot back at people who are shooting at them.

A holiness spiral is only useful if you’re actually spiraling into holiness.

The simple observation that extreme versions of ideologies often seem to lead their followers to lose contact with reality is perhaps reason enough for someone to profess some form of principled moderatism.

And yet, I know for certain that were I a religious person, I would not be moderate. (I base this on my childhood approach to religion and the observances of my biological relatives–I wager I have a genetic inclination toward intense religiosity.) Since few people convert away from the religion they were raised with, if I were a believer from a Hindu family, I’d be a devout Hindu; if I were a believer from a Catholic family, I’d attend mass in Latin; if Jewish, I’d be Orthodox Jewish. You get the picture.

After all, what is the point of going to Heaven (or Hell,) only a little bit?

To be continued.