Do Sufficiently Large Organizations Start Acting Like Malevolent AIs? (pt 2)

(Part 1 is here)

As we were discussing on Monday, even as our networks have become more effective, our ability to incorporate new information may actually have gone down. Ironically, as we add more people to a group–beyond a certain limit–it becomes more difficult for individuals with particular expertise to convince everyone else in the group that the group’s majority consensus is wrong.

The difficulties large groups experience trying to coordinate and share information force them to become dominated by procedures–set rules of behavior and operation are necessary for large groups to operate. A group of three people can use ad-hoc consensus and rock-paper-scissors to make decisions; a nation of 320 million requires a complex body of laws and regulations. (I once tried to figure out just how many laws and regulations America has. The answer I found was that no one knows.)

An organization is initially founded to accomplish some purpose that benefits its founders–generally to make them well-off, but often also to produce some useful good or service. A small organization is lean, efficient, and generally exemplifies the ideals put forth in Adam Smith’s invisible hand:

It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest. We address ourselves, not to their humanity but to their self-love, and never talk to them of our necessities but of their advantages. —The Wealth Of Nations, Book I

As an organization ages and grows, its founders retire or move on, it becomes more dependent on policies and regulations, and each individual employee finds his own incentives further displaced from the company’s original purpose. Soon a company is no longer devoted to the well-being of either its founders or its customers, but to the company itself. (And that’s kind of a best-case scenario, in which the company doesn’t just disintegrate into individual self-interest.)

I am reminded of a story about a computer that had been programmed to play Tetris–actually, it had been programmed not to lose at Tetris. So the computer paused the game. A paused game cannot lose.
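
Here is a toy sketch of that dynamic (my own construction in Python, not the actual program from the story): when the only objective is “don’t lose,” a naive optimizer happily converges on pausing forever.

```python
import random

# Hypothetical toy environment, not real Tetris: each unpaused step
# carries a small chance that the stack tops out and the game is lost.
def play_one_step():
    return random.random() > 0.02   # True = still alive

def total_reward(policy, horizon=1000):
    """Reward = number of steps survived under a 'don't lose' objective."""
    paused, reward = False, 0
    for _ in range(horizon):
        if policy == "pause":
            paused = True           # a paused game can never reach a loss state
        if not paused and not play_one_step():
            break                   # game over: reward stops accumulating
        reward += 1
    return reward

print(total_reward("play"))   # usually well under 1000
print(total_reward("pause"))  # always 1000 -- the optimizer's favorite
```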

What percentage of employees (especially management) are incentivized to win? And what percentage are incentivized not to lose?

And no, I don’t mean that in some 80s buzzword-esque way. Most employees have more to lose (ie, their jobs) if something goes wrong as a result of their actions than to gain if something goes right. The stockholders might hope that employees are doing everything they can to maximize profits, but really, most people are trying not to mess up and get fired.

Fear of messing up goes beyond the individual scale. Whole companies are goaded by concerns about risk–“Could we get sued?” Large corporations have entire legal teams devoted to telling them how they could get sued for whatever they’re doing, and to filing lawsuits against their competitors for whatever they’re doing.

This fear of risk carries over, in turn, to government regulations. As John Sanphillipo writes in City Regulatory Hurdles Favor Big Developers, not the Little Guy:

A family in a town I visited bought an old fire station a few years ago with the intention of turning it into a Portuguese bakery and brewpub. They thought they’d have to retrofit the interior of the building to meet health and safety standards for such an establishment.

Turns out the cost of bringing the landscape around the outside of the building up to code was their primary impediment. Mandatory parking requirements, sidewalks, curb cuts, fire lanes, on-site stormwater management, handicapped accessibility, drought-tolerant native plantings…it’s a very long list that totaled $340,000 worth of work. … Guess what? They decided not to open the bakery or brewery. …

Individually it’s impossible to argue against each of the particulars. Do you really want to deprive people in wheelchairs of the basic civil right of public accommodation? Do you really want the place to catch fire and burn? Do you want a barren landscape that’s bereft of vegetation? …

I was in Hamtramck, Michigan a couple of years ago to participate in a seminar about reactivating neighborhoods through incremental small-scale development. …

While the event was underway the fire marshal happened to drive by and noticed there were people—a few dozen actual humans—occupying a commercial building in broad daylight. In a town that has seen decades of depopulation and disinvestment, this was an odd sight. And he was worried. Do people have permission for this kind of activity? Had there been an inspection? Was a permit issued? Is everything insured? He called one of his superiors to see if he should shut things down in the name of public safety.

It’s a good article. You should read the whole thing.

Back in Philippe Bourgois’s In Search of Respect: Selling Crack in El Barrio, Bourgois describes one drug dealer’s attempt to use the money he’d made to go into honest business by opening a convenience store. Unfortunately, he couldn’t get the store compliant with NYC disability-access regulations, and so the store never opened and the owner went back to dealing drugs. (What IQ, I wonder, is necessary to comply with all of these laws and regulations in the first place?)

Now, I’m definitely in favor of disabled people being able to buy groceries and use bathrooms. But what benefits a disabled person more: a convenience store that’s not fully wheel-chair accessible, or a crack house?

In My IRB Nightmare, Scott Alexander writes about trying to do a simple study to determine whether the screening test already being used to diagnose people with bipolar disorder is effective at diagnosing them:

When we got patients, I would give them the bipolar screening exam and record the results. Then Dr. W. would conduct a full clinical interview and formally assess them. We’d compare notes and see how often the screening test results matched Dr. W’s expert diagnosis.

Remember, they were already using the screening test on patients and then having them talk to the doctor for a formal assessment. The only thing the study added was that Scott would compare how well the screening results matched the formal assessment. No patients would be injected, subjected to new procedures, or even asked different questions. They just wanted to compare two data sets.

After absurd quantities of paperwork and an approval process much too long to summarize here, the project got audited:

I kept the audit report as a souvenir. I have it in front of me now. Here’s an example infraction:

The data and safety monitoring plan consists of ‘the Principal Investigator will randomly check data integrity’. This is a prospective study with a vulnerable group (mental illness, likely to have diminished capacity, likely to be low income) and, as such, would warrant a more rigorous monitoring plan than what is stated above. In addition to the above, a more adequate plan for this study would also include review of the protocol at regular intervals, on-going checking of any participant complaints or difficulties with the study, monitoring that the approved data variables are the only ones being collected, regular study team meetings to discuss progress and any deviations or unexpected problems. Team meetings help to assure participant protections, adherence to the protocol. Having an adequate monitoring plan is a federal requirement for the approval of a study. See Regulation 45 CFR 46.111 Criteria For IRB Approval Of Research. IRB Policy: PI Qualifications And Responsibility In Conducting Research. Please revise the protocol via a protocol revision request form. Recommend that periodic meetings with the research team occur and be documented.

… Faced with submitting twenty-seven new pieces of paperwork to correct our twenty-seven infractions, Dr. W and I gave up. We shredded the patient data and the Secret Code Log. We told all the newbies they could give up and go home. … We told the IRB that they had won, fair and square; we surrendered unconditionally.

The point of all that paperwork and supervision is to make sure that no one replicates the Tuskegee Syphilis Experiment or the Nazi anything. Noble sentiments–but as a result, a study comparing two data sets had to be canceled.

I’ve noticed recently that much of the interesting medical research is happening in the third world/China–places where the regulations aren’t as strong and experiments (of questionable ethics or not) can actually get done.

Like the computer taught not to lose at Tetris, all of these systems are more focused on minimizing risk–even non-existent risk–than on actually succeeding.

In his review of Yudkowsky’s Inadequate Equilibria, Scott writes:

…[Yudkowsky] continues to the case of infant parenteral nutrition. Some babies have malformed digestive systems and need to have nutrient fluid pumped directly into their veins. The nutrient fluid formula used in the US has the wrong kinds of lipids in it, and about a third of babies who get it die of brain or liver damage. We’ve known for decades that the nutrient fluid formula has the wrong kind of lipids. We know the right kind of lipids and they’re incredibly cheap and there is no reason at all that we couldn’t put them in the nutrient fluid formula. We’ve done a bunch of studies showing that when babies get the right nutrient fluid formula, the 33% death rate disappears. But the only FDA-approved nutrient fluid formula is the one with the wrong lipids, so we just keep giving it to babies, and they just keep dying. Grant that the FDA is terrible and ruins everything, but over several decades of knowing about this problem and watching the dead babies pile up, shouldn’t somebody have done something to make this system work better?

The doctors have to use the FDA-approved formula or they could get sued for malpractice. The insurance companies, of course, only cover the FDA-approved formula. The formula makers are already making money selling the current formula and would probably have to go through an expensive, multi-year review system (with experiments far more regulated than Scott’s) to get the new formula approved, and even then they might not actually get approval. In short, on one side are people in official positions of power whose lives could be made worse (or less convenient) if they tried to fix the problem, and on the other side are dead babies who can’t stand up for themselves.

The Chankiri Tree (Killing Tree) where infants were fatally smashed, Choeung Ek, Cambodia.

Communism strikes me as the ultimate expression of this beast: a society fully transformed into a malevolent AI. It’s impossible to determine exactly how many people were murdered by communism, but the Black Book of Communism estimates a death toll between 85 and 100 million people.

Capitalism, for all its faults, is at least somewhat decentralized. If you make a bad business decision, you suffer the consequences and can hopefully learn from your mistakes and make better decisions in the future. But in communist systems, one central planner’s bad decisions can cause suffering for millions of other people, resulting in mass death. Meanwhile, the central planner may suffer for correcting the bad decision. Centralized economies simply lack the feedback loops necessary to fix problems before they start killing people.

While FDA oversight of medicines is probably important, would it be such a bad thing if a slightly freer market in parenteral nutrition allowed parents to choose between competing brands of formula, each promising not to kill your baby?

Of course, capitalism isn’t perfect, either. SpottedToad recently had an interesting post, 2010s Identity Politics as Hostile AI:

There’s an interesting post mortem on the rise and fall of the clickbait liberalism site Mic.com, that attracted an alleged 65 million unique visitors on the strength of Woketastic personal stories like “5 Powerful Reasons I’m a (Male) Feminist,” …

Every time Mic had a hit, it would distill that success into a formula and then replicate it until it was dead. Successful “frameworks,” or headlines, that went through this process included “Science Proves TK,” “In One Perfect Tweet TK,” “TK Reveals the One Brutal Truth About TK,” and “TK Celebrity Just Said TK Thing About TK Issue. Here’s why that’s important.” At one point, according to an early staffer who has since left, news writers had to follow a formula with bolded sections, which ensured their stories didn’t leave readers with any questions: The intro. The problem. The context. The takeaway.

…But the success of Mic.com was due to algorithms built on top of algorithms. Facebook targets which links are visible to users based on complex and opaque rules, so it wasn’t just the character of the 2010s American population that was receptive to Mic.com’s specific brand of SJW outrage clickbait, but Facebook’s rules for which articles to share with which users and when. These rules, in turn, are calibrated to keep users engaged in Facebook as much as possible and provide the largest and most receptive audience for its advertisers, as befits a modern tech giant in a two-sided market.

Professor Bruce Charlton has a post, Head Girl Syndrome–the Opposite of Creative Genius, that is good and short enough that I wish I could quote the whole thing. A piece must suffice:

The ideal Head Girl is an all-rounder: performs extremely well in all school subjects and has a very high Grade Point Average. She is excellent at sports, Captaining all the major teams. She is also pretty, popular, sociable and well-behaved.

The Head Girl will probably be a big success in life, in whatever terms being a big success happens to be framed …

But the Head Girl is not, cannot be, a creative genius. …

The more selective the social system, the more it will tend to privilege the Head Girl and eliminate the creative genius.

Committees, peer review processes, voting – anything which requires interpersonal agreement and consensus – will favour the Head Girl and exclude the creative genius.  …

*

We live in a Head Girl’s world – which is also a world where creative genius is marginalized and disempowered to the point of near-complete invisibility.

The quest for social status is, I suspect, one of the things driving the system. Status-oriented people refuse to accept information that comes from people of lower status than themselves, which renders system feedback even more difficult. The internet as a medium of information sharing is beautiful; the internet as a medium of status signalling is horrible.

So what do you think? Do sufficiently large organizations start acting like malevolent (or hostile) AIs?

(Back to Part 1)


Do Sufficiently Large Organizations Start Acting Like Malevolent AIs? (pt 1)

(and Society is an Extremely Large Organization)

What do I mean by malevolent AI?

AI typically refers to any kind of intelligence or ability to learn possessed by machines. Malevolent AI occurs when a machine pursues its programmed objectives in a way that humans find horrifying or immoral. For example, a machine programmed to make paperclips might decide that the easiest way to maximize paperclip production is to enslave humans to make paperclips for it. Superintelligent AI is AI that has figured out how to make itself smarter and thus keeps getting smarter and smarter. (Should we develop malevolent superintelligent AI, then we’ll really have something to worry about.)

Note: people who actually study AI probably have better definitions than I do.

While we like to think of ourselves (humans) as unique, thinking individuals, it’s clear that many of our ideas come from other people. Chances are good you didn’t think up washing your hands or brushing your teeth by yourself, but learned about them from your parents. Society puts quite a bit of effort, collectively speaking, into teaching children all of the things people have learned over the centuries–from heliocentrism to the fact that bleeding patients generally makes diseases worse, not better.

Just as we cannot understand the behavior of ants or bees simply by examining the anatomy of a single ant or single bee, but must look at the collective life of the entire colony/hive, so we cannot understand human behavior by merely examining a single human, but must look at the collective nature of human societies. “Man is a political animal,” by which Aristotle did not mean that we are inherently inclined to fight over transgender bathrooms, but that we are instinctively social:

Hence it is evident that the state is a creation of nature, and that man is by nature a political animal. And he who by nature and not by mere accident is without a state, is either above humanity, or below it; he is the ‘Tribeless, lawless, hearthless one,’ whom Homer denounces—the outcast who is a lover of war; he may be compared to a bird which flies alone.

Now the reason why man is more of a political animal than bees or any other gregarious animals is evident. Nature, as we often say, makes nothing in vain, and man is the only animal whom she has endowed with the gift of speech. And whereas mere sound is but an indication of pleasure or pain, and is therefore found in other animals (for their nature attains to the perception of pleasure and pain and the intimation of them to one another, and no further), the power of speech is intended to set forth the expedient and inexpedient, and likewise the just and the unjust. And it is a characteristic of man that he alone has any sense of good and evil, of just and unjust, and the association of living beings who have this sense makes a family and a state. –Aristotle, Politics

With very rare exceptions, humans–all humans, in all parts of the world–live in groups. Tribes. Families. Cities. Nations. Our nearest primate relatives, chimps and bonobos, also live in groups. Primates are social, and their behavior can only be understood in the context of their groups.

Groups of humans are able to operate in ways that individual humans cannot, drawing on the collective memories, skills, and knowledge of their members to create effects much greater than what could be achieved by each person acting alone. For example, one lone hunter might be able to kill a deer–or if he is extremely skilled, hardworking, and lucky, a dozen deer–but ten hunters working together can drive an entire herd of deer over a cliff, killing hundreds or even thousands. (You may balk at the idea, but many traditional hunting societies were dependent on only a few major hunts of migrating animals to provide the majority of their food for the entire year–meaning that those few hunts had to involve very high numbers of kills or else the entire tribe would starve while waiting for the animals to return.)

Chimps have never, to my knowledge, driven megafauna to extinction–but humans have a habit of doing so wherever they go. Humans are great at what we do, even if we aren’t always great at extrapolating long-term trends.

But the beneficial effects of human cooperation don’t necessarily continue to increase as groups grow larger–China’s 1.3 billion people don’t appear to have better lives than Iceland’s 332,000 people. Indeed, there probably is some optimal size–depending on activity and available communications technology–beyond which the group struggles to coordinate effectively and begins to degenerate.

CBS advises us to make groups of 7:

As it turns out, seven is a great number for not only forming an effective fictional fighting force, but also for task groups that use spreadsheets instead of swords to do their work.

That’s according to the new book Decide & Deliver: 5 Steps to Breakthrough Performance in Your Organization (Harvard Business Press).

Once you’ve got 7 people in a group, each additional member reduces decision effectiveness by 10%, say the authors, Marcia W. Blenko, Michael C. Mankins, and Paul Rogers.

Unsurprisingly, a group of 17 or more rarely makes a decision other than when to take a lunch break.
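
Taking the authors’ rule of thumb at face value–and assuming the decay is linear, which is my reading rather than anything they spell out–the arithmetic works out like this:

```python
def decision_effectiveness(group_size, optimum=7, penalty=0.10):
    """Relative decision effectiveness under the Blenko/Mankins/Rogers rule of thumb
    (assumed linear: each member beyond the optimum costs 10 percentage points)."""
    if group_size <= optimum:
        return 1.0
    return max(0.0, 1.0 - penalty * (group_size - optimum))

for n in (3, 7, 10, 17):
    print(n, decision_effectiveness(n))
# 17 members -> 0.0, consistent with the quip that such groups rarely
# decide anything beyond when to break for lunch.
```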

Princeton blog reports:

The trope that the likelihood of an accurate group decision increases with the abundance of brains involved might not hold up when a collective faces a variety of factors — as often happens in life and nature. Instead, Princeton University researchers report that smaller groups actually tend to make more accurate decisions, while larger assemblies may become excessively focused on only certain pieces of information. …

collective decision-making has rarely been tested under complex, “realistic” circumstances where information comes from multiple sources, the Princeton researchers report in the journal Proceedings of the Royal Society B. In these scenarios, crowd wisdom peaks early then becomes less accurate as more individuals become involved, explained senior author Iain Couzin, a professor of ecology and evolutionary biology. …

The researchers found that the communal ability to pool both pieces of information into a correct, or accurate, decision was highest in a band of five to 20. After that, the accurate decision increasingly eluded the expanding group.

Couzin found that in small groups, people with specialized knowledge could effectively communicate that to the rest of the group, whereas in larger groups, they simply couldn’t convey their knowledge to enough people and group decision-making became dominated by the things everyone knew.
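
A toy simulation (my own construction, not Couzin’s actual model) shows the mechanism: give a few “specialists” reliable information and everyone else a weak, misleading cue, and majority voting gets worse as the group grows.

```python
import random

def group_decision(n_members, n_specialists=3, p_common=0.4, p_specialist=0.9):
    """One majority vote: True if the group picks the correct option."""
    votes = []
    for i in range(n_members):
        p = p_specialist if i < min(n_specialists, n_members) else p_common
        votes.append(random.random() < p)   # True = votes for the correct option
    return sum(votes) > n_members / 2

def accuracy(n_members, trials=10_000):
    return sum(group_decision(n_members) for _ in range(trials)) / trials

for n in (5, 11, 21, 51, 101):
    print(n, round(accuracy(n), 3))
# Accuracy is high while the specialists make up a large share of the vote,
# then sinks toward the misleading "common knowledge" baseline as n grows.
```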

If you could travel back in time and propose the idea of democracy to the inhabitants of 13th century England, they’d respond with incredulity: how could peasants in far-flung corners of the kingdom find out who was running for office? Who would count the votes? How many months would it take to tally up the results, determine who won, and get the news back to the outlying provinces? If you have a printing press, news–and speeches–can quickly and accurately spread across large distances and to large numbers of people, but prior to the press, large-scale democracy simply wasn’t practical.

Likewise, the communism of 1917 probably couldn’t have been enacted in 1776, simply because society at that time didn’t have the technology yet to gather all of the necessary data on crop production, factory output, etc. (As it was, neither did Russia of 1917, but they were closer.)

Today, the amount of information we can gather and share on a daily basis is astounding. I have at my fingertips the world’s greatest collection of human knowledge, an overwhelming torrent of data.

All of these information networks have linked society together into an increasingly efficient meta-brain–unfortunately, it’s not a very smart meta-brain. Like the participants in Couzin’s experiments, we are limited to what “everyone knows,” stymied in our efforts to impart more specialized knowledge. (I don’t know about you, but I find being shouted down by a legion of angry people who know less about a subject than I do one of the particularly annoying features of the internet.)

For example, there’s been a lot of debate lately about immigration, but how much do any of us really know about immigrants or immigrant communities? How much of this debate is informed by actual knowledge of the people involved, and how much is just people trying to extend vague moral principles to cover novel situations? I recently had a conversation with a progressive acquaintance who justified mass-immigration on the grounds that she has friendly conversations with the cabbies in her city. Heavens protect us–I hope to get along with people as friends and neighbors, not just when I am paying them!

One gets the impression in conversation with Progressives that they regard Christian Conservatives as a real threat–a group that can throw its weight around in elections or generally enforce cultural norms that liberals don’t like–but are completely oblivious to the immigrants’ beliefs. Most of our immigrants hail from countries that are rather more conservative than the US and definitely more conservative than our liberals.

Any sufficiently intelligent democracy ought to be able to think critically about the political opinions of the new voters it is awarding citizenship to, but we struggle with this. My Progressive acquaintance seems to think that we can import an immense, conservative, third-world underclass and it will stay servile indefinitely, not vote its own interests or have any effect on social norms. (Or its interests will be, coincidentally, hers.)

This is largely an information problem–most Americans are familiar with our particular brand of Christian conservatives, but are unfamiliar with Mexican or Islamic ones.

How many Americans have intimate, detailed knowledge of any Islamic society? Very few of us who are not Muslim ourselves speak Arabic, and few Muslim countries are major tourist destinations. Aside from the immigrants themselves, soldiers, oil company employees, and a handful of others have spent time in Islamic countries, but that’s about it–and no one is making any particular effort to listen to their opinions. (It’s a bit sobering to realize that I know more about Islamic culture than 90% of Americans and I still don’t really know anything.)

So instead of making immigration policy based on actual knowledge of the groups involved, people try to extend the moral rules–heuristics–they already have. So people who believe that “religious tolerance is good,” because this rule has generally been useful in preventing conflict between American religious groups, think this rule should include Muslim immigrants. People who believe, “I like being around Christians,” also want to apply their rule. (And some people believe, “Groups are more oppressive when they’re the majority, so I want to re-structure society so we don’t have a majority,” and use that rule to welcome new immigrants.)

And we are really bad at testing whether or not our rules are continuing to be useful in these new situations.


Ironically, as our networks have become more effective, our ability to incorporate new information may have actually gone down.

The difficulties large groups experience trying to coordinate and share information force them to become dominated by procedures–set rules of behavior and operation are necessary for large groups to operate. A group of three people can use ad-hoc consensus and rock-paper-scissors to make decisions; a nation of 320 million requires a complex body of laws and regulations.

But it’s getting late, so let’s continue this discussion in the next post.

Are “Nerds” Just a Hollywood Stereotype?

Yes, MIT has a football team.

The other day on Twitter, Nick B. Steves challenged me to find data supporting or refuting his assertion that Nerds vs. Jocks is a false stereotype, invented around 1975. Of course, we HBDers have a saying–“all stereotypes are true,” even the ones about us–but let’s investigate Nick’s claim and see where it leads us.

(NOTE: If you have relevant data, I’d love to see it.)

Unfortunately, terms like “nerd,” “jock,” and “chad” are not all that well defined. Certainly if we define “jock” as “athletic but not smart” and nerd as “smart but not athletic,” then these are clearly separate categories. But what if there’s a much bigger group of people who are smart and athletic?

Or what if we are defining “nerd” and “jock” too narrowly? Wikipedia defines nerd as, “a person seen as overly intellectual, obsessive, or lacking social skills.” I recall a study–which I cannot find right now–which found that nerds had, overall, lower-than-average IQs, but that study included people who were obsessive about things like comic books, not just people who majored in STEM. Similarly, should we define “jock” only as people who are good at sports, or do passionate sports fans count?

For the sake of this post, I will define “nerd” as “people with high math/science abilities” and “jock” as “people with high athletic abilities,” leaving the matter of social skills undefined. (People who merely like video games or watch sports, therefore, do not count.)

Nick is correct on one count: according to Wikipedia, although the word “nerd” has been around since 1951, it was popularized during the 70s by the sitcom Happy Days. However, Wikipedia also notes that:

An alternate spelling,[10] as nurd or gnurd, also began to appear in the mid-1960s or early 1970s.[11] Author Philip K. Dick claimed to have coined the nurd spelling in 1973, but its first recorded use appeared in a 1965 student publication at Rensselaer Polytechnic Institute.[12][13] Oral tradition there holds that the word is derived from knurd (drunk spelled backward), which was used to describe people who studied rather than partied. The term gnurd (spelled with the “g”) was in use at the Massachusetts Institute of Technology by 1965.[14] The term nurd was also in use at the Massachusetts Institute of Technology as early as 1971 but was used in the context for the proper name of a fictional character in a satirical “news” article.[15]

suggesting that the word was already common among nerds themselves before it was picked up by TV.

But we can trace the nerd-jock dichotomy back before the terms were coined: back in 1921, Lewis Terman, a researcher at Stanford University, began a long-term study of exceptionally high-IQ children, the Genetic Studies of Genius aka the Terman Study of the Gifted:

Terman’s goal was to disprove the then-current belief that gifted children were sickly, socially inept, and not well-rounded.

This belief was especially popular in a little nation known as Germany, where it inspired both long hikes in the woods to keep schoolchildren fit and the mass extermination of Jews, who were believed to be muddying the German gene pool with their weak, sickly, high-IQ genes (and nefariously trying to marry strong, healthy Germans in order to replenish their own defective stock.) It didn’t help that German Jews were both high-IQ and beset by a number of illnesses (probably related to high rates of consanguinity,) but then again, the Gypsies are beset by even more debilitating illnesses, yet no one blames this on all of the fresh air and exercise afforded by their highly mobile lifestyles.

(Just to be thorough, though, the Nazis also exterminated the Gypsies and Hans Asperger’s subjects, despite Asperger’s insistence that they were very clever children who could probably be of great use to the German war effort via code breaking and the like.)

The results of Terman’s study are strongly in Nick’s favor. According to Psychology Today’s  account:

His final group of “Termites” averaged a whopping IQ of 151. Following-up his group 35-years later, his gifted group at mid-life definitely seemed to conform to his expectations. They were taller, healthier, physically better developed, and socially adept (dispelling the myth at the time of high-IQ awkward nerds).

According to Wikipedia:

…the first volume of the study reported data on the children’s family,[17] educational progress,[18] special abilities,[19] interests,[20] play,[21] and personality.[22] He also examined the children’s racial and ethnic heritage.[23] Terman was a proponent of eugenics, although not as radical as many of his contemporary social Darwinists, and believed that intelligence testing could be used as a positive tool to shape society.[3]

Based on data collected in 1921–22, Terman concluded that gifted children suffered no more health problems than normal for their age, save a little more myopia than average. He also found that the children were usually social, were well-adjusted, did better in school, and were even taller than average.[24] A follow-up performed in 1923–1924 found that the children had maintained their high IQs and were still above average overall as a group.

Of course, we can go back even further than Terman–in the early 1800s, allergies like hay fever were associated with the nobility, who of course did not do much vigorous work in the fields.

My impression, based on studies I’ve seen previously, is that athleticism and IQ are positively correlated. That is, smarter people tend to be more athletic, and more athletic people tend to be smarter. There’s a very obvious reason for this: our brains are part of our bodies, people with healthier bodies therefore also have healthier brains, and healthier brains tend to work better.

At the very bottom of the IQ distribution, mentally retarded people tend to also be clumsy, flaccid, or lacking good muscle tone. The same genes (or environmental conditions) that make children have terrible health/developmental problems often also affect their brain growth, and conditions that affect their brains also affect their bodies. As we progress from low to average to above-average IQ, we encounter increasingly healthy people.

In most smart people, high IQ doesn’t seem to be a random fluke, a genetic error, or fitness-reducing: in a genetic study of children with exceptionally high IQs, researchers failed to find many genes that specifically endowed the children with genius, but found instead a fortuitous absence of deleterious genes that knock a few points off the rest of us. The same genes that have a negative effect on the nerves and proteins in your brain probably also have a deleterious effect on the nerves and proteins throughout the rest of your body.

And indeed, there are many studies which show a correlation between intelligence and strength (eg, Longitudinal and Cross-Sectional Assessments of Age Changes in Physical Strength as Related to Sex, Social Class, and Mental Ability) or intelligence and overall health/not dying (eg, Intelligence in young adulthood and cause-specific mortality in the Danish Conscription Database (pdf) and The effects of occupation-based social position on mortality in a large American cohort.)

On the other hand, the evolutionary standard for “fitness” isn’t strength or longevity, but reproduction, and on this scale the high-IQ don’t seem to do as well:

Smart teens don’t have sex (or kiss much either): (h/t Gene Expression)

Controlling for age, physical maturity, and mother’s education, a significant curvilinear relationship between intelligence and coital status was demonstrated; adolescents at the upper and lower ends of the intelligence distribution were less likely to have sex. Higher intelligence was also associated with postponement of the initiation of the full range of partnered sexual activities. … Higher intelligence operates as a protective factor against early sexual activity during adolescence, and lower intelligence, to a point, is a risk factor.


Here we see the issue plainly: males at 120 and 130 IQ are less likely to get laid than clinically retarded men with IQs in the 60s and 70s. The right side of the graph is the “nerds”; the left side, the “jocks.” Of course, the high-IQ females are even less likely to get laid than the high-IQ males, but males tend to judge themselves against other men, not women, when it comes to dating success. Since the low-IQ females are much less likely to get laid than the low-IQ males, this implies that most of these “popular” guys are dating girls who are smarter than themselves–a fact not lost on the nerds, who would also like to date those girls.

In 2001, the MIT/Wellesley magazine Counterpoint (Wellesley is MIT’s “sister school” and the two campuses allow cross-enrollment in each other’s courses) published a sex survey that provides a more detailed picture of nerd virginity:

I’m guessing that computer scientists invented polyamory, and neuroscientists are the chads of STEM. The results are otherwise pretty predictable.

Unfortunately, Counterpoint appears to be defunct due to lack of funding/interest and I can no longer find the original survey, but here is Jason Malloy’s summary from Gene Expression:

By the age of 19, 80% of US males and 75% of women have lost their virginity, and 87% of college students have had sex. But this number appears to be much lower at elite (i.e. more intelligent) colleges. According to the article, only 56% of Princeton undergraduates have had intercourse. At Harvard 59% of the undergraduates are non-virgins, and at MIT, only a slight majority, 51%, have had intercourse. Further, only 65% of MIT graduate students have had sex.

The student surveys at MIT and Wellesley also compared virginity by academic major. The chart for Wellesley displayed below shows that 0% of studio art majors were virgins, but 72% of biology majors were virgins, and 83% of biochem and math majors were virgins! Similarly, at MIT 20% of ‘humanities’ majors were virgins, but 73% of biology majors. (Apparently those most likely to read Darwin are also the least Darwinian!)

College Confidential has one paragraph from the study:

How Rolling Stone-ish are the few lucky souls who are doing the horizontal mambo? Well, not very. Considering all the non-virgins on campus, 41% of Wellesley and 32% of MIT students have only had one partner (figure 5). It seems that many Wellesley and MIT students are comfortingly monogamous. Only 9% of those who have gotten it on at MIT have been with more than 10 people and the number is 7% at Wellesley.

Someone needs to find the original study and PUT IT BACK ON THE INTERNET.

But this lack of early sexual success seems to translate into long-term marital happiness, once nerds find “the one.” Lex Fridman’s Divorce Rates by Profession offers a thorough list. The average divorce rate was 16.35%, with a high of 43% (Dancers) and a low of 0% (“Media and communication equipment workers.”)

I’m not sure exactly what all of these jobs are nor exactly which ones should count as STEM (veterinarian? anthropologists?) nor do I know how many people are employed in each field, but I count 49 STEM professions that have lower than average divorce rates (including computer scientists, economists, mathematical science, statisticians, engineers, biologists, chemists, aerospace engineers, astronomers and physicists, physicians, and nuclear engineers,) and only 23 with higher than average divorce rates (including electricians, water treatment plant operators, radio and telecommunication installers, broadcast engineers, and similar professions.) The purer sciences obviously had lower rates than the more practical applied tech fields.

The big outliers were mathematicians (19.15%), psychologists (19.26%), and sociologists (23.53%), though I’m not sure they count (if so, there were only 22 professions with higher than average divorce rates.)

I’m not sure which professions count as “jock” or “chad,” but athletes had lower than average rates of divorce (14.05%), as did firefighters, soldiers, and farmers. Financial examiners, hunters, and dancers (presumably an athletic female occupation), however, had very high rates of divorce.

Medical Daily has an article on Who is Most Likely to Cheat? The Top 9 Jobs Unfaithful People Have (according to survey):

According to the survey recently taken by the “infidelity dating website,” Victoria Milan, individuals working in the finance field, such as brokers, bankers, and analysts, are more likely to cheat than those in any other profession. However, following those in finance comes those in the aviation field, healthcare, business, and sports.

With the exception of healthcare and maybe aviation, these are pretty typical Chad occupations, not STEM.

The Mirror has a similar list of jobs where people are most and least likely to be married. Most likely: Dentist, Chief Executive, Sales Engineer, Physician, Podiatrist, Optometrist, Farm product buyer, Precision grinder, Religious worker, Tool and die maker.

Least likely: Paper-hanger, Drilling machine operator, Knitter textile operator, Forge operator, Mail handler, Science technician, Practical nurse, Social welfare clerk, Winding machine operative, Postal clerk.

I struggled to find data on male fertility by profession/education/IQ, but there’s plenty on female fertility, eg the deceptively titled High-Fliers have more Babies:

…American women without any form of high-school diploma have a fertility rate of 2.24 children. Among women with a high-school diploma the fertility rate falls to 2.09 and for women with some form of college education it drops to 1.78.

However, among women with college degrees, the economists found the fertility rate rises to 1.88 and among women with advanced degrees to 1.96. In 1980 women who had studied for 16 years or more had a fertility rate of just 1.2.

As the economists prosaically explain: “The relationship between fertility and women’s education in the US has recently become U-shaped.”

Here is another article about the difference in fertility rates between high and low-IQ women.

But female fertility and male fertility may not be the same–I recall data elsewhere indicating that high-IQ men have more children than low IQ men, which implies those men are having their children with low-IQ women. (For example, while Bill and Hillary seem about matched on IQ, and have only one child, Melania Trump does not seem as intelligent as Trump, who has five children.)

Amusingly, I did find data on fertility rate by father’s profession for 1920, in the Birth Statistics for the Birth Registration Area of the US:

Of the 1,508,874 children born in 1920 in the birth registration area of the United States, occupations of fathers are stated for … 96.9%… The average number of children ever born to the present wives of these occupied fathers is 3.3 and the average number of children living 2.9.

The average number of children ever born ranges from 4.6 for foremen, overseers, and inspectors engaged in the extraction of minerals to 1.8 for soldiers, sailors, and marines. Both of these extreme averages are easily explained, for soldiers, sailors, and marines are usually young, while such foremen, overseers, and inspectors are usually in middle life. For many occupations, however, the ages of the fathers are presumably about the same and differences shown indicate real differences in the size of families. For example, the low figures for dentists (2), architects (2.1), and artists, sculptors, and teachers of art (2.2) are in striking contrast with the figures for mine operatives (4.3), quarry operatives (4.1), bootblacks, and brick and stone masons (each 3.9). …

As a rule the occupations credited with the highest number of children born are also credited with the highest number of children living, the highest number of children living appearing for foremen, overseers, and inspectors engaged in the extraction of minerals (3.9) and for steam and street railroad foremen and overseer (3.8), while if we exclude groups plainly affected by the age of fathers, the highest number of children living appear for mine and quarry operatives (each 3.6).

Obviously the job market was very different in 1920–no one was majoring in computer science. Perhaps some of those folks who became mine and quarry operatives back then would become engineers today–or perhaps not. Here are the average numbers of surviving children for the most obviously STEM professions (remember average for 1920 was 2.9):

Electricians 2.1, electrotypers 2.2, telegraph operators 2.2, actors 1.9, chemists 1.8, inventors 1.8, photographers and physicians 2.1, technical engineers 1.9, veterinarians 2.2.

I don’t know what paper hangers do, but the Mirror said they were among the least likely to be married, and in 1920, they had an average of 3.1 children–above average.

What about athletes? How smart are they?

“Athletes Show Huge Gaps on SAT Scores” is not a promising title for the “nerds are athletic” crew.

The Journal-Constitution studied 54 public universities, “including the members of the six major Bowl Championship Series conferences and other schools whose teams finished the 2007-08 season ranked among the football or men’s basketball top 25.”…

  • Football players average 220 points lower on the SAT than their classmates. Men’s basketball was 227 points lower.
  • University of Florida won the prize for biggest gap between football players and the student body, with players scoring 346 points lower than their peers.
  • Georgia Tech had the nation’s best average SAT score for football players, 1028 of a possible 1600, and best average high school GPA, 3.39 of a possible 4.0. But because its student body is apparently very smart, Tech’s football players still scored 315 SAT points lower than their classmates.
  • UCLA, which has won more NCAA championships in all sports than any other school, had the biggest gap between the average SAT scores of athletes in all sports and its overall student body, at 247 points.

From the original article, which no longer seems to be up on the Journal-Constitution website:

All 53 schools for which football SAT scores were available had at least an 88-point gap between team members’ average score and the average for the student body. …

Football players performed 115 points worse on the SAT than male athletes in other sports.

The differences between athletes’ and non-athletes’ SAT scores were less than half as big for women (73 points) as for men (170).

Many schools routinely used a special admissions process to admit athletes who did not meet the normal entrance requirements. … At Georgia, for instance, 73.5 percent of athletes were special admits compared with 6.6 percent of the student body as a whole.

On the other hand, as Discover Magazine discusses in “The Brain: Why Athletes are Geniuses,” athletic tasks–like catching a fly ball or slapping a hockey puck–require exceptionally fast and accurate brain signals to trigger the correct muscle movements.

Ryan Stegal studied the GPAs of high school student athletes vs. non-athletes and found that the athletes had higher average GPAs than the non-athletes, but he also notes that the athletes were required to meet certain minimum GPA requirements in order to play.

But within athletics, it looks like the smarter athletes perform better than dumber ones, which is why the NFL uses the Wonderlic Intelligence Test:

NFL draft picks have taken the Wonderlic test for years because team owners need to know if their million dollar player has the cognitive skills to be a star on the field.

What does the NFL know about hiring that most companies don’t? They know that regardless of the position, proof of intelligence plays a profound role in the success of every individual on the team. It’s not enough to have physical ability. The coaches understand that players have to be smart and think quickly to succeed on the field, and the closer they are to the ball the smarter they need to be. That’s why, every potential draft pick takes the Wonderlic Personnel Test at the combine to prove he does–or doesn’t—have the brains to win the game. …

The first use of the WPT in the NFL was by Tom Landry of the Dallas Cowboys in the early 70s, who took a scientific approach to finding players. He believed players who could use their minds where it counted had a strategic advantage over the other teams. He was right, and the test has been used at the combine ever since.

For the NFL, years of testing shows that the higher a player scores on the Wonderlic, the more likely he is to be in the starting lineup—for any position. “There is no other reasonable explanation for the difference in test scores between starting players and those that sit on the bench,” Callans says. “Intelligence plays a role in how well they play the game.”

Let’s look at Exercising Intelligence: How Research Shows a Link Between Physical Activity and Smarts:

A large study conducted at the Sahlgrenska Academy and Sahlgrenska University Hospital in Gothenburg, Sweden, reveals that young adults who regularly exercise have higher IQ scores and are more likely to go on to university.

The study was published in the Proceedings of the National Academy of Sciences (PNAS), and involved more than 1.2 million Swedish men. The men were performing military service and were born between the years 1950 and 1976. Both their physical and IQ test scores were reviewed by the research team. …

The researchers also looked at data for twins and determined that primarily environmental factors are responsible for the association between IQ and fitness, and not genetic makeup. “We have also shown that those youngsters who improve their physical fitness between the ages of 15 and 18 increase their cognitive performance.”…

I have seen similar studies before, some involving mice and some, IIRC, the elderly. It appears that exercise is probably good for you.

I have a few more studies I’d like to mention quickly before moving on to discussion.

Here’s Grip Strength and Physical Demand of Previous Occupation in a Well-Functioning Cohort of Chinese Older Adults (h/t prius_1995), which found that participants who had previously worked in construction had greater grip strength than former office workers.

Age and Gender-Specific Normative Data of Grip and Pinch Strength in a Healthy Adult Swiss Population (h/t prius_1995).


If the nerds are in the sedentary cohort, then they’d be just as athletic as, if not more athletic than, all of the other cohorts except the heavy-work group.

However, in Revised normative values for grip strength with the Jamar dynamometer, the authors found no effect of profession on grip strength.

And Isometric muscle strength and anthropometric characteristics of a Chinese sample (h/t prius_1995).

And Pumpkin Person has an interesting post about brain size vs. body size.


Discussion: Are nerds real?

Overall, it looks like smarter people are more athletic, more athletic people are smarter, smarter athletes are better athletes, and exercise may make you smarter. For most people, the nerd/jock dichotomy is wrong.

However, there is very little overlap at the very highest end of the athletic and intelligence curves–most college (and thus professional) athletes are less intelligent than the average college student, and most college students are less athletic than the average college (and professional) athlete.

Additionally, while people with STEM degrees make excellent spouses (except for mathematicians, apparently,) their reproductive success is below average: they have sex later than their peers and, as far as the data I’ve been able to find shows, have fewer children.

Stephen Hawking

Even if there is a large overlap between smart people and athletes, they are still separate categories selecting for different things: a cripple can still be a genius, but can’t play football; a dumb person can play sports, but not do well at math. Stephen Hawking can barely move, but he’s still one of the smartest people in the world. So the set of all smart people will always include more “stereotypical nerds” than the set of all athletes, and the set of all athletes will always include more “stereotypical jocks” than the set of all smart people.

In my experience, nerds aren’t socially awkward (aside from their shyness around women.) The myth that they are stems from the fact that they have different interests and communicate in a different way than non-nerds. Let nerds talk to other nerds, and they are perfectly normal, communicative, socially functional people. Put them in a room full of non-nerds, and suddenly the nerds are “awkward.”

Unfortunately, the vast majority of people are not nerds, so many nerds have to spend the majority of their time in the company of lots of people who are very different than themselves. By contrast, very few people of normal IQ and interests ever have to spend time surrounded by the very small population of nerds. If you did put them in a room full of nerds, however, you’d find that suddenly they don’t fit in. The perception that nerds are socially awkward is therefore just normie bias.

Why did the nerd/jock dichotomy become so popular in the 70s? Probably in part because science and technology were really taking off as fields normal people could aspire to major in: man had just landed on the moon, and the Intel 4004 was released in 1971. Very few people went to college or were employed in the sciences back in 1920; by 1970, colleges were everywhere and science was booming.

And at the same time, colleges and high schools were ramping up their athletics programs. I’d wager that the average school in the 1800s had neither PE nor athletics of any sort. To find those, you’d probably have to attend private academies like Andover or Exeter. By the 70s, though, schools were taking their athletics programs–even athletic recruitment–seriously.

How strong you felt the dichotomy probably depends on the nature of your school. I have attended schools where all of the students were fairly smart and there was no anti-nerd sentiment, and I have attended schools where my classmates were fiercely anti-nerd and made sure I knew it.

But the dichotomy predates the terminology. Take Superman, who first appeared in 1938. His disguise is a pair of glasses, because no one can believe that the bookish, mild-mannered Clark Kent is actually the super-strong Superman. Batman is based on the character of El Zorro, created in 1919. Zorro is an effete, weak, foppish nobleman by day and a dashing, sword-fighting hero of the poor by night. Of course these characters are both smart and athletic, but their disguises only work because others do not expect them to be. As fantasies, the characters are powerful because they provide a vehicle for our own desires: for our everyday normal failings to be just a cover for how secretly amazing we are.

But for the most part, most smart people are perfectly fit, healthy, and coordinated–even the ones who like math.


Decompression

So I’ve been doing a long project on crime/criminals. So far I’ve read about pirates, Angola Prison, horseback outlaws, outlaw motorcycle clubs, and currently, the mafia.

The books are good, but this is not light reading. After reading about meth whores abusing their kids for a chapter or two, you find yourself wanting to head over to the nearest church.

And I’ve got two and a half books left to go.

Obviously I don’t like crime. Few people do. I’d like for criminals to go away.

I also don’t want non-criminals accidentally imprisoned for crimes they didn’t commit. I don’t want petty criminals over-punished for minor crimes that don’t warrant it. I don’t want a system where some people have access to good lawyers and a shot at “justice” and some people don’t.

I wish we could talk about crime, and the police, and the justice system, and how all of that should work, and subjects like “do the police shoot people inappropriately?” without getting dragged into the poison of tribal political bickering. I especially don’t like the idea that as a result of people trying to prevent one form of murder (police shootings), far more people have ended up being murdered by common criminals. (At least, that’s what the data looks like.)

Obviously we live in an imperfect world with imperfect people in which there may in fact be a trade off between level of police / justice system violence and level of criminal violence. If you have 10 suspects and you know 5 are serial killers but you don’t know which 5, imprisoning all 10 will get the killers off the streets but also imprison 5 innocents, while freeing all of them will result in a bunch more murders. It would be nice to be perfect, but we’re not. We’re humans.

I think there are a lot of problems with the way the legal/justice system operates, but I don’t see how we’re going to get anywhere with fixing it. People need to be genuinely motivated to make it better, not just tribally interested in taking a side over BLM. And most people really aren’t interested in fixing it.

And then there’s the criminal side. (Oh, and on a related note: Portland Deletes Its Gang List for Having Too Many Blacks)

I’m often reminded of a passage in Sudhir Venkatesh’s Gang Leader for a Day (which I read ages ago) in which he expressed frustration at his fellow academics. You see, Venkatesh was doing street-level, real live research in–I think it was Chicago–by actually going into ghetto neighborhoods and making friends with the people, interacting with them, seeing what their lives were really like. At the same time, Venkatesh was a university student studying “poverty” or something like that, and so would frequently attend lectures by academic types talking about ways to address poverty or fight poverty or what have you, and it was obvious to him that many of these lecturers had no idea what they were talking about.

And really, people do this a lot. They propose a bunch of feel-good solutions to problems they don’t actually understand.

This is pretty much all of politics, really.

I remember a conversation with a well-meaning liberal acquaintance that occurred shortly after I finished Philippe Bourgois's In Search of Respect: Selling Crack in El Barrio. She suggested that better public transportation networks would help poor people get to resources like public museums, which would enrich their lives. I thought this was a stupid response. People trying to make ends meet by acting as lookouts for crack gangs, or struggling to find a job after getting out of prison, do not care about museums. I said something to that effect, and I don't think she likes me anymore.

Deep down inside, I wish we lived in a kumbaya-world of happy bunnies frolicking in the forest and children holding hands and singing about how happy they are. I wish people were honest, and pure, and well-intentioned. I wish we could go to the museum, experience beauty, and feel connected to each other and history and culture. I wish none of us had to wear suits and that jobs didn’t grind up people’s souls and spit them out. I wish people could see the humanity in each other, because when we stop seeing that, we stop being human.

And to a large degree, we live in a very nice world. We live in a world with medicines and antibiotics. Where child mortality is low and mothers rarely die in childbirth. Where surgery is done with anesthesia. I have a comfortable home, lots of books, and plenty of food. I spend much of my time reading about times and places where these weren’t the norm, which makes me quite grateful for what I have. It also sometimes keeps me up late at night when I should be asleep.

It’s a good world, but it isn’t kumbaya world. It’s a world with criminals and idiots and mal-intentioned people. It’s a world that got to be good because people worked very hard to make it that way (many people died to make it that way) and it’s a world that doesn’t have to stay that way. We can ruin it.

While researching the previous Cathedral Round-Up, I came across what I think is a professor’s old Myspace page. Suddenly this professor went from “person who wrote really pretentious-sounding dissertation” to “human being.” They were a kid once, trying to figure out their place in this world. They looked sad in some of their pictures. Were they lonely? Outcast? Bullied?

I hate "dissertation language" and hate how simple (sometimes even reasonable) ideas get wrapped up in unnecessarily complex verbiage just to make them sound astonishing. I hate it on principle. I hate how the same people who talk about "privilege" use a writing style that is, itself, accessible to and performed by only an extremely privileged few. Much of it is self-centered drivel, and pretending it has anything to do with uplifting the poor is unadulterated hypocrisy.

All of this internet-driven SJW political signaling is really performative morality. When you are dealing with a real flesh-and-blood human being in your own community, someone you'll have to interact with repeatedly over the course of years, you'll try to be faithful, honest, dutiful, loyal, dependable, etc., and you'll value those same traits in others. Put us on the internet, and we have no need for any of that. We're not going to cooperate in any meaningful, real-world way with a bunch of people on the internet. Morality on the internet becomes performative, a show you put on for a third-party audience. Here the best thing isn't to be dependable, but to have the best-sounding opinions. Status isn't built on your long-term reputation but on your ability to prove that other people are less moral than you.

I noticed years ago that people on the internet often did not debate honestly with each other, but would lie and distort the other person’s argument. Why would they do this? Surely they couldn’t hope to win by lying to someone’s face about their own argument! It only makes sense if you assume the goal of the discussion isn’t to convince the other person, but to convince some other person watching the debate. If you get lots of approval from your adoring Tumblr/Twitter/whatever fans for saying all the right things and accusing your opponents of being all of the wrong, immoral sorts of things, then who cares what the person those remarks are actually directed at thinks of them?

And who cares if you are actually a good, decent, reliable, honest person?

As someone who writes a blog that often discusses other people’s work for the sake of my own audience, I must admit that I, too, am guilty here.

But hey, at least I haven’t put a meathook up anyone’s ass.

So I guess I’ll just end by encouraging everyone to go and be decent people.

Clinton vs. Bush, or Why you should have Multiple Children


As the only heir of the immensely wealthy and powerful Clinton family, Chelsea has been thrust into the public spotlight following her mother’s electoral defeat.

Unfortunately for the Blue Tribe she is supposed to lead, Mrs. Chelsea isn’t too bright. Her Twitter comment was intended as a critique of the claim that Confederate statues and monuments should not be torn down because they symbolize part of America’s history.

Milan Cathedral

Her claim–that she has never seen a statue of Lucifer in a church–depends entirely on the churches Chelsea has personally visited. "I have not personally seen this" is a bad argument. All it takes is a few churches she hasn't attended that happen to have Lucifer statues to disprove her whole point.

And if you know anything about churches, you know that some of them have an awful lot of statues. The Milan Cathedral has 3,400 of them! They cover these things with gargoyles–do you really want to make a political argument that hinges on whether or not there’s a Lucifer in there somewhere?

The Vatican’s new statue

But you don't have to travel to famous Italian cathedrals to hunt for the Devil; I have a statuette of Satan defeated by Michael the Archangel about ten feet away on my mantel. Do you know how many statues there are of this guy? Both Popes got together in 2013 to consecrate a new statue of him–complete with Lucifer–in the middle of the holy Vatican City.

Satan also shows up in Christian art and iconography in a variety of disguises, such as a Dragon (defeated by St. George) and a serpent (trod upon by the Virgin Mary.)

If we expand our search to include paintings and stained glass, we find almost endless examples, such as the famous Sistine Chapel frescoes (Michelangelo put the Mouth of Hell right over the Pope’s chair, I hear.)

But even if we limit ourselves to freestanding statues solely of Lucifer himself–not of him being defeated or crushed, not in symbolic form nor painted on the walls, we still have this rather cute little Devil seated outside Marienkirche church in Lübeck, Germany; this large and creepy statue of Lucifer tangled in power lines in the Holy Trinity Church in Marylebone, Westminster; a frightening devil from the church of Sainte-Marie-Madeleine de Rennes-le-Château; Satan pushing the damned into the Leviathan’s mouth, a 12th century Romanesque Carving from the Church Sainte Foy, France; Satan again; carving of Satan being cast out of Heaven from Pisa, Italy; the Devil weighing souls and leading away the damned, Notre Dame, France; another from Notre Dame; devils carved into a church in Lincolnshire, England; a little devil in St. Severin, France; Satan on a pillar, Chatellerault, France; statue of the Devil at the Grotto of St. Anthony, Belgium; and for that matter, there are a lot of frankly obscene carvings in older churches.

We could do this all day, but I think you get the point: there are a lot of depictions of the Devil in Christian churches. Having been raised Methodist is no excuse; somewhere between attending Sidwell Friends, Stanford, Oxford, Columbia, etc., Chelsea has surely learned something about European art.

Considering Chelsea’s level of worldliness–one of the privileged glitterati who get to spend their lives drifting from board to board of different companies and exclusive soirees for the rich and famous–you’d expect her to have at least noticed the carvings on a European cathedral or two.

Even Chelsea’s writing career shows few signs of brilliance: she’s written two books for kids (one of those a picture book) and co-authored one for adults, which has–wow–absolutely rock-bottom reviews. Considering her kids’ books got good reviews, I don’t think this is a troll campaign–it looks like her book is actually terrible.

Unfortunately for the increasingly old and decrepit senior Clintons, lackluster Chelsea is the only egg in their basket: they have no other kids to prop her up or take the limelight for her.

The Bush family in the Red Room of the White House (January 2005).

By contrast, President and First Lady George H. W. and Barbara Bush had six children–George Walker Bush, Pauline Robinson "Robin" Bush (1949–1953, died of leukemia), John Ellis "Jeb" Bush, Neil Mallon Bush, Marvin Pierce Bush, and Dorothy Bush Koch.[9] George "W." Bush, although not noted for intellectual excellence, managed to follow in his father's footsteps and also become President; his brother Jeb was governor of Florida; and Neil and Marvin are doing well for themselves in business.

According to Wikipedia, George and Barbara's five surviving children have produced 14 grandchildren (including two adoptees) and 7 great-grandchildren, for a total of 24 living descendants. Chelsea Clinton, while obviously younger, has only 2 children.

Having one child is an effective way to concentrate wealth, but the Bush family, by putting its eggs into more baskets, has given itself more opportunities for exceptional children to rise to prominence and make alliances (marriages, friendships) with other wealthy and powerful families.

The Clintons, by contrast, now have only Chelsea to lead them.

Book Review: Aphrodite and the Rabbis

When I started researching Judaism, the first thing I learned was that I didn’t know anything about Judaism. It turns out that Judaism-in-the-Bible (the one I was familiar with) and modern Judaism are pretty different.

Visotzky’s Aphrodite and The Rabbis: How the Jews Adapted Roman Culture to Create Judaism as we Know it explores the transition from Biblical to Rabbinic Judaism. If you’re looking for an introductory text on the subject, it’s a good place to start. (It doesn’t go into the differences between major modern-day Jewish groups, though. If you’re looking for that, Judaism for Dummies or something along those lines will probably do.)

I discussed several ideas gleaned from this book in my prior post on Talmudism and the Constitution. Visotzky’s thesis is basically that Roman culture (really, Greco-Roman culture) was the dominant culture in the area when the Second Temple fell, and thus was uniquely influential on the development of early Rabbinic Judaism.

Just to recap: Prior to 70 AD, Judaism was located primarily in its homeland of Judea, a Roman province. It was primarily a temple cult–people periodically brought sacrifices to the priests, the priests sacrificed them, and the people went home. There were occasional prophets, but there were no rabbis (though there were Pharisees, who were similar.) The people also practiced something we'll call, for simplicity's sake, folk Judaism, passed down culturally and not always explicitly laid out in the Mosaic Law. (This is a simplification; see the prior post for more details.)

Then there were some big rebellions, which the Romans crushed. They razed the Temple (save for the Western Wall) and eventually drove many of the Jews into exile.

It was in this Greco-Roman cultural environment, Visotzky argues, that the no-longer-practicable Temple Judaism was transformed into Rabbinic Judaism.

Visotzky marshals his arguments: Jewish catacombs in Rome, Talmudic stories that reference Greek or Roman figures, Greek fables that show up in Jewish collections, Greek and Roman words that appear in Jewish texts, Greco-Roman style art in synagogues (including a mosaic of Zeus!), the model of the rabbi + his students as similar to the philosopher and his students (eg, Socrates and Plato,) Jewish education as modeled on Greek rhetorical education, and the Passover Seder as a Greek symposium.

Allow me to quote Visotzky on the latter:

The recipe for a successful symposium starts, of course, with wine. At least three cups, preferably more, and ideally you would need between three and five famous guests. Macrobius describes a symposium at which he imagined all the guests drinking together, even though some were already long dead. They eat hors d'oeuvres, which they dip into a briny sauce. Their appetite is whetted by sharp vegetables, radishes, or romaine lettuce. The Greek word for these veggies is karpos. Each food is used as a prompt to dig through one's memory to find apposite bookish quotes about it. … Above all, guests at a symposium loved to quote Homer, the divine Homer. …

To kick off a symposium, a libation was poured to Bacchus. Then the dinner guests took their places reclining on pillows, leaning on their left arms, and using their right hands to eat. Of course, they washed their fingers before eating their Mediterranean flatbreads, scooping up meats and poultry–no forks back then.

Athenaeus records a debate about dessert, a sweet paste of fruit, wine, and spices. Many think it a nice digestive, but Athenaeus quotes Heracleides of Tarentum, who argues that such a lovely dish ought to be the appetizer, eaten at the outset of the meal. After the sumptuous meal and the endless quotation of texts… the symposium diners sang their hymns of thanksgiving to the gods. …

All of this should seem suspiciously familiar to anyone who has ever attended a Passover Seder. The traditional Seder begins with a cup of wine, and blessings to God are intoned. Then hands are washed in preparation for eating the dipped vegetables, called karpos, the Greek word faithfully transliterated into Hebrew in the Passover Haggadah. Like the symposiasts, Jews dip in brine. The traditional Haggadah recalls who was there at the earliest Seders: Rabbi Eliezer … Rabbi Aqiba, and Rabbi Tarphon (a Hebraized version of the Greek name Tryphon). The conversation is prompted by noting the foods that are served and by asking questions whose answers quote sacred scripture. …

Traditionally the Passover banquet is eaten leaning on the left side, on pillows. Appetites are whetted by bitter herbs and then sweetened by the paste-like Haroset (following the opinion of Heracleides of Tarentum?) Seder participants even scoop up food in flatbread. Following the Passover meal there are hymns to God.

Visotzky relates one major difference between the Greek and Jewish versions: the Greeks ended their symposiums with a "descent into debauchery," announced api komias–to the comedians! Jews did not:

Indeed, the Mishnah instructs, "We do not end the meal after eating the paschal lamb by departing api komias." That final phrase, thanks to the Talmud of Jewish Babylonia, where they did not know Greek, has come to be Hebraized as "afi-komen," the hidden piece of matzo eaten for dessert.

The one really important piece of data that he leaves out–perhaps he hasn't heard the news–is the finding that Ashkenazi Jews are genetically about half Italian. This Italian DNA is primarily on their maternal side–that is, Jewish men, expelled from Judea, ended up in Rome and took local wives. (Incidentally, Visotzky also claims that the tradition of tracing Jewish ancestry along the maternal line instead of the paternal was adopted from the Romans, since it isn't found in the Bible but was the Roman practice.) These Italian ladies didn't leave behind many stories in the Talmud, but surely they had some effect on the religion.

On the other hand, what about Jews in areas later controlled by Islam, like Maimonides? Was Rome a major influence on him, too? What about the Babylonian Talmud, written more or less in what is now Iraq?

Modern Christianity owes a great deal to Greece and Rome. Should modern Judaism be understood in the Greco-Roman lens, as well?

Mongolia Isn’t Sorry

Genghis Khan killed approximately 40 million people–so many that historians debate whether the massive decrease in agriculture caused by the deaths of so many farmers helped trigger the Little Ice Age. DNA analysis indicates that roughly 1 in 200 men alive today is a direct male-line descendant of Genghis Khan or his immediate male relatives.

The Genghis Khan Equestrian Statue, erected in 2008 near Ulaanbaatar, Mongolia, stands 130 ft (40 m) tall, with an entire museum inside its pedestal. It is one of the world's tallest statues–and the tallest equestrian statue–a list it shares primarily with statues of the Buddha and other eastern deities.

Mongolians regard him as the father of their country.

The Talmud and the Constitution

This post is about similarities between the development of Jewish law and American law.

A story is recounted in the Babylonian Talmud, which I am going to paraphrase slightly for clarity:

Rabbi Yehudah said, “Rav (Abba Aricha) said, “When Moses ascended Mount Sinai, up to the heavens, to receive the Biblical law, he found God sitting and adding calligraphic flourishes (crowns) to the letters.

Moses said, "Master of the Universe! Why are you going so slowly? Why aren't you finished?"

God said to him, “Many generations from now, Akiva the son of Yosef will expound on every calligraphic detail to teach piles and piles of laws.”

Moses said, “Master of the Universe! Show him to me,” so God told him to turn around, and a vision of Rabbi Akiva teaching his students appeared. Moses went and sat in the back row, but the teaching style was so intellectual that he did not understand what they were talking about and got upset.

Then one of the students asked Rabbi Akiva, “Our teacher, where did you learn this law?”

Akiva replied, “It is from a law that was taught to Moses at Sinai.”

So Moses calmed down. He returned and came before the Holy One, Blessed be He, and said before Him, "Master of the Universe! If you have a man like this, why are you giving the Torah through me?"

But God only replied, “Be silent. This is what I have decided.”””

2,000 years ago, when Yeshua of the house of David still walked the Earth, rabbinic Judaism–the Judaism you’ll find if you walk into any synagogue–did not fully exist.* The Judaism of Roman Judea was a temple cult, centered on the great Temple in Jerusalem (though there were others, in Turkey, Greece, Egypt, and of course, Samaria.) Ordinary Jews went about their business–raising crops, tending goats, building tables, etc–and every so often they visited the Temple, bought or brought an offering, and had the priest sacrifice it.

*Note: See the comments for a discussion of continuity between Pharisaic Judaism and Rabbinic Judaism. I am not arguing that Rabbinic Judaism was invented whole cloth.

69 AD, also known as the Year of the Four Emperors, was a particularly bad year for the Roman Empire. Galba seized power after Nero's suicide, only to be murdered on January 15 in a coup led by Otho. Emperor Otho committed suicide on April 16 after losing the Battle of Bedriacum to Vitellius. Vitellius was murdered on December 20 by Vespasian's troops.

Meanwhile, Judea was in revolt. In 70 AD, Vespasian’s son (and successor) Titus besieged Jerusalem, crushed the rebellion, and razed the Temple.

Without the Temple–and worse, scattered to the winds–what was an ordinary Jew supposed to do? Where could he take his sacrifices? How was he supposed to live in this new land? Could he visit a bath house that had a statue of Aphrodite? Could he eat food that had been sold beside non-kosher meat?

The Bible has 613 laws for Jews to follow, but do you know how many laws you live under?

I once did a research project on the subject. I found that no one knows how many laws there are in the US. We have federal, state, county, and city laws. We have the Code of Federal Regulations, containing thousands of rules created by unelected bureaucrats within dozens of agencies like the EPA, all of which are enforced exactly like laws. We have thousands of pages of case law handed down by the Supreme Court.

It’s one thing to live in an organic community, following the traditions handed down by your ancestors. Then perhaps 613 laws are enough. But with the destruction of the Temple, Judaism had to adapt. Somehow they had to get a full body of laws out of those measly 613.

Enter Rabbi Akiva (also spelled Akiba or Aqiba) and his calligraphic flourishes. By examining and re-examining the text, comparing a verse from one section to a similar verse in another, groups of rabbis (teachers) and their students gradually built up a body of laws, first passed down orally (the Oral Torah) and then written: the Talmud.

For example, the Fourth Commandment says to Remember the Sabbath Day, but how, exactly, are you supposed to do it? The Bible says not to "work" (or so we translate it), but isn't a rabbi preaching his sermon on Saturday working? To clarify, the rabbis look to the next verse, "For in six days the LORD made heaven and earth, the sea, and all that in them is, and rested the seventh day: wherefore the LORD blessed the sabbath day, and hallowed it" (Exodus 20:11), and declare that "work" here refers to creative work: building, writing, sewing, sowing, reaping, carrying (materials for creative work), building fires–or, inversely, putting out fires, knocking down buildings, etc. Merely giving a speech–even if you get paid for it–is not work. (Though you can't accept the payment on Saturday.)

The word for “work” in the Bible, transliterated as “melachah,” is further interpreted as related to “melekh,” king, relating it back to God (the King)’s work. Melachah is not found very often in the Bible, but shows up again in Exodus 31, during a discussion of the work done to build the Ark of the Covenant [which is not actually a boat] and various related tents–a discussion which is suddenly interrupted for a reminder about the Sabbath. From this, it was reasoned that work specifically mentioned in the first part of the passage was what was prohibited in the second part, and therefore these were among the specific varieties of work forbidden on Shabbat.

If a suitably similar verse could not be found elsewhere in the text to explicate an inadequate passage, rabbis found other ways of decoding God’s “original intent,” including gematria and the aforementioned calligraphic flourishes. Hey, if God wrote it, then God can encode messages in it.

Which gets us back to the story at the beginning of the post. Note how it begins: the Talmud says that Rabbi Yehudah said, "Rav said… 'Moses said…'" This is a written account of an oral account, passed from teacher to student, of a conversation between God and Moses (recipient of the Torah, the first five books of the Bible, and of the Oral Torah, which was just how everyone lived). The conversation dramatizes the transformation from Mosaic Judaism, centered on the Temple and lived tradition, to Rabbinic Judaism, centered on repeated reading and interpretation of the holy text, which now contains all of the things that used to just be part of everyone's traditions.

The result, of course, was the Talmud–or rather multiple Talmuds, though the Babylonian is the most commonly cited. The Vilna Edition of the Babylonian Talmud runs 37 volumes, and looks like this:

The inner section is a passage from the original Talmud. The inner margin is Rashi (a famous rabbi)’s commentary, the outer margin is additional commentary from other famous rabbis, and around the edges you can see marginalia from even more rabbis.

Like an onion, it is layer upon layer upon layer.

But what authority do the rabbis have to make pronouncements about the law?

The Talmud recounts an amusing argument about whether an oven could be purified:

The Sages taught: On that day, when they discussed this matter, Rabbi Eliezer answered all possible answers in the world to support his opinion, but the Rabbis did not accept his explanations from him.

After failing to convince the Rabbis logically, Rabbi Eliezer said to them: If the halakha is in accordance with my opinion, this carob tree will prove it. The carob tree was uprooted from its place one hundred cubits, and some say four hundred cubits.

The Rabbis said to him: One does not cite halakhic proof from the carob tree.

Rabbi Eliezer then said to them: If the halakha is in accordance with my opinion, the stream will prove it. The water in the stream turned backward and began flowing in the opposite direction.

They said to him: One does not cite halakhic proof from a stream.

Rabbi Eliezer then said to them: If the halakha is in accordance with my opinion, the walls of the study hall will prove it. The walls of the study hall leaned inward and began to fall.

Rabbi Yehoshua scolded the walls and said to them: If Torah scholars are contending with each other in matters of halakha, what is the nature of your involvement in this dispute?

The Gemara relates: The walls did not fall because of the deference due Rabbi Yehoshua, but they did not straighten because of the deference due Rabbi Eliezer, and they still remain leaning.

Rabbi Eliezer then said to them: If the halakha is in accordance with my opinion, Heaven will prove it.

A Divine Voice emerged from Heaven and said: Why are you differing with Rabbi Eliezer, as the halakha is in accordance with his opinion in every place that he expresses an opinion?

Rabbi Yehoshua stood on his feet and said: It is written: “It is not in heaven” (Deuteronomy 30:12).

The Gemara asks: What is the relevance of the phrase “It is not in heaven” in this context?

Rabbi Yirmeya says: Since the Torah was already given at Mount Sinai, we do not regard a Divine Voice, as You already wrote at Mount Sinai, in the Torah: “After a majority to incline” (Exodus 23:2). Since the majority of Rabbis disagreed with Rabbi Eliezer’s opinion, the halakha is not ruled in accordance with his opinion.

The Gemara relates: Years after, Rabbi Natan encountered Elijah the prophet and said to him: What did the Holy One, Blessed be He, do at that time, when Rabbi Yehoshua issued his declaration?

Elijah said to him: The Holy One, Blessed be He, smiled and said: My children have triumphed over Me; My children have triumphed over Me.

So say the rabbis!

(You might be thinking, "Didn't Elijah live a long time before the rabbis?" But since Elijah was taken up in a whirlwind, he never died, and thus may still be encountered.)

The importance of this little bit of Talmudism–in my opinion–is that it lets the rabbis modify practice to avoid parts of the Bible that people don't like anymore, like stoning adulterers. Sure, they do so by legalistically telling God to buzz off–they're interpreting the law now–but hey, "Israel" means "wrestled with God":

So Jacob was left alone, and a man wrestled with him till daybreak. … Then the man said, “Let me go, for it is daybreak.”

But Jacob replied, “I will not let you go unless you bless me.” …

Then the man said, "Your name will no longer be Jacob, but Israel, because you have struggled with God and with humans and have overcome." (Genesis 32:24–28)

Arguing with God. It’s a Jew thing.

The downside to all of this is that the Talmud is SUPER LONG and gets bogged down in boring legal debates about EVERYTHING.

Every so often, a group of Jews decides that all of this Talmud stuff is really too much and tries to sweep it away, starting fresh with just the Laws of Moses. Karaite Jews, for example, reject the Talmud, claiming instead to derive all of their laws directly from the Bible. They have therefore written several hundred books of their own interpreting Biblical law.

Hasidic Judaism was founded by the Baal Shem Tov, a rabbi who (according to his followers) emphasized the importance of having a "spiritual connection" to God (which even poor Jews could do) over legalistic arguing about texts (which a rich atheist could do, but not a poor man). Today, Hasidic Jews are prominent among the Orthodox Jews who actually care about extensive, strict interpretation and implementation of Jewish law.

It’s not that reform is worthless–it’s just that the Bible doesn’t contain enough details to use as a complete legal code to govern the lives of people who no longer live in the organic, traditional community that originally produced it. When people lived in that community, they didn’t need explicit instructions about how to build a sukkah or honor the Sabbath day, because their parents taught them how. Illiterate shepherds didn’t need a long book of legal opinions to tell them how to treat their guests or what to do with a lost wallet–they already learned those lessons from their community.

It’s only with the destruction of the Temple and the expulsion of the Jews from Judea that there comes a need for a written legal code explaining how, exactly, everything in the culture is supposed to be done.

Okay, but what does all of this have to do with the Constitution?

As legal documents go, the Constitution is pretty short. Since page size can vary, we’ll look at words: including all of the amendments and signatures, the Constitution is 7,591 words long.

The Affordable Care Act (aka Obamacare) clocks in at a whopping 363,086 words, of which 234,812 actually have to do with the law; the rest are headers, tables of contents, and the like. (For comparison, The Fellowship of the Ring only has 177,227 words.)

Interestingly, the US Constitution is both the oldest and shortest constitution of any major government in the world. This is not a coincidence. By contrast, the Indian Constitution, passed in 1949, is 145,000 words long–the longest in the world, but still shorter than the ACA.
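If you like checking the arithmetic, here's a quick back-of-the-envelope sketch in Python. The word counts are simply the figures quoted above; the variable names are mine, chosen for illustration, not anything official:

# Word counts as quoted above (approximate)
us_constitution = 7_591        # US Constitution, including amendments and signatures
aca_total = 363_086            # Affordable Care Act, full text
aca_substantive = 234_812      # ACA minus headers, tables of contents, etc.
fellowship = 177_227           # The Fellowship of the Ring
india_constitution = 145_000   # Indian Constitution of 1949
for name, words in [("ACA (total)", aca_total),
                    ("ACA (substantive)", aca_substantive),
                    ("Fellowship of the Ring", fellowship),
                    ("Indian Constitution", india_constitution)]:
    print(f"{name}: ~{words / us_constitution:.0f}x the US Constitution")

Run it and the ACA comes out to roughly 48 times the length of the Constitution (about 31 times counting only the substantive text).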

People often blame the increasing complexity of US law on Talmudic scholars, but I think we’re actually looking at a case of convergent evolution–the process by which two different, not closely related species develop similar traits in response to similar environments or selective pressures. Aardvarks and echidnas, for example, are not closely related–aardvarks are placental mammals while echidnas lay eggs–but both creatures eat ants, and so have evolved similar looking noses. (Echidnas also look a lot like hedgehogs.)

US law has become more complex for the same reasons Jewish law did: because we no longer live in organic communities where tradition serves as a major guide to proper behavior, for both social and technical reasons. Groups of people whose ancestors were separated by thousands of miles of ocean or desert now interact on a daily basis; new technologies our ancestors could have never imagined are now commonplace. Even homeless people can go to the library, enjoy the air conditioning, log onto a computer, and post something on Facebook that can be read, in turn, by a smartphone-toting Afghan shepherd on the other side of the world.

The result is a confused morass. Groups of people who don’t know how to talk to each other have degenerated into toxic “call-out culture” and draconian speech codes. (Need I remind you that some poor sod just lost his job at Google for expressing views backed by mountains of scientific evidence, just because it offended a bunch of SJWs?) Campus speech codes (which infringe on First Amendment rights) are now so draconian that people are discussing ways to use a different set of laws–the Americans with Disabilities Act–to challenge them.

Even the entry of large numbers of women into colleges and the paid workforce (as opposed to unpaid labor women formerly carried out in homes and farms) has simultaneously removed them from the protective company of male relatives while bringing them into constant contact with male strangers. This has forced a massive shift both in social norms and an increase in legal protections afforded to women, whom the state now protects from harassment, “hostile work environments,” rape, assault, discrimination, etc.

Without tradition to guide us, we try to extrapolate from some common, agreed upon principles–such as those codified in the Constitution. But the Constitution is short; it doesn’t even remotely cover all of the cases we are now trying to use it to justify. What would the founding fathers say about machine guns, nuclear missiles, or international copyright law? The responsibilities of universities toward people with medical disabilities? Medications that induce abortions or unionized factory workers?

The Constitution allows Congress to grant Letters of Marque and Reprisal–that is, to officially commission pirates as privateers: private citizens, a la Sir Francis Drake, licensed to attack the ships of (certain) foreign nations. But Letters of Marque and Reprisal haven't actually been granted since 1815, and the practice has been out of favor among European governments since 1856. Like stoning, privateering just isn't done anymore, even though it is technically still right there in the Constitution.

By contrast, the Supreme Court recently ruled that the Constitution says that the states have to issue gay marriage licenses. Whether you agree with gay marriage or not, this is some Rabbi Yehoshua, "It is not in heaven," level reasoning. I'm pretty sure that if you raised the Founding Fathers or the authors of the 14th Amendment from the dead and asked their ghosts whether the Constitution mandates gay marriage, they'd look at you like you'd just grown a second head and then call you crazy. Gay sex wasn't just illegal in every state, it was punishable by execution in several, and Thomas Jefferson himself wrote a bill for the state of Virginia which penalized it via castration.

But “living constitution” and all that. A majority of modern Americans think gay marriage should be legal and don’t want to execute or dismember homosexuals, so society finds a way.

It’d be more honest to say, “Hey, we don’t really care what people thought about gay marriage 200+ years ago; we’re going to make a new law that suits our modern interests,” but since the legitimacy of the whole legal edifice is built on authority derived from the Constitution, people feel they must find some way to discover legal novelties in the text.

Like a man trying to fix a broken fence by piling up more wood on it, so American law has become an enormous, burdensome pile of regulation after regulation. Where traditions can be flexible–changing depending on human judgment or in response to new conditions–laws, by nature, are inflexible. Changing them requires passing more laws.

The Talmud may be long, but at least I can eat a bacon cheeseburger on leavened bread on a Saturday during Passover with no fear of going to jail. Even Israelis aren’t significantly restricted by Talmudic law unless they want to be.

By contrast, I can be put in prison for violating the endlessly complex US law. I could spend the next ten pages recounting stories of people fined or imprisoned for absurd and trivial things–bakers fined out of business for declining to bake a gay wedding cake, children’s lemonade stands shut down for lack of proper permits, teenagers imprisoned and branded “sex offenders” for life for having consensual sex with each other. Then there’s the corporate side: 42% of multi-million dollar patent litigation suits that actually go to court (instead of the parties just settling) result in the court declaring that the patent involved should have never been granted in the first place! Corporate law is so complex and lawsuits so easy to bring that it now functions primarily as a way for corporations to try to drive their competitors out of business. Lawsuits are no longer a sign that a company has acted badly or unethically, but merely a “cost of doing business.”

How many businesses never get started because the costs of regulation compliance are too high? How many people never get jobs as a result? How many hours of our lives are sucked away while we fill out tax forms or muddle through insurance paperwork?

Eventually we have to stop piling up wood and start tearing out rotten posts.

 

PS: For more information on the development of Rabbinic Judaism, I recommend Visotzky’s Aphrodite and the Rabbis: How the Jews adapted Roman Culture to Create Judaism as we Know it.

Notes on the Muslim Brotherhood

(I’m pretty much starting from scratch)

Sayyid Qutb lived from 1906 to 1966. He was an Egyptian writer, thinker, and leader of the Muslim Brotherhood. He was executed in 1966 for plotting to assassinate the Egyptian president, Nasser.

The Muslim Brotherhood was founded back in 1928 by Islamic scholar Hassan al-Banna. Its goal is to instill the Quran and the Sunnah as the “sole reference point for … ordering the life of the Muslim family, individual, community … and state”;[13] mottos include “Believers are but Brothers”, “Islam is the Solution”, and “Allah is our objective; the Qur’an is the Constitution; the Prophet is our leader; jihad is our way; death for the sake of Allah is our wish”.[14][15]

As of 2015, the MB was considered a terrorist organization by Bahrain,[7][8] Egypt, Russia, Syria, Saudi Arabia and United Arab Emirates.[9][10][11][12]

The MB's philosophy is pan-Islamist, and it wields political power in several countries (a quick percentage tally follows the list):

323/354 seats in the Sudanese National Assembly,
74/132 seats in the Palestinian Legislature,
69/217 seats in the Tunisian assembly,
39/249 seats in the Afghan House,
46/301 seats in Yemen,
16/146 seats in Mauritania,
40/560 seats in Indonesia
2/40 seats in Bahrain
and 4/325 and 1/128 in Iraq and Lebanon, respectively
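
As promised above, here is that tally worked out as percentages–a quick, illustrative Python sketch using only the seat counts just listed (the dictionary and its labels are mine, for illustration):

# Seats held / total seats, as listed above
mb_seats = {
    "Sudan": (323, 354),
    "Palestinian Legislature": (74, 132),
    "Tunisia": (69, 217),
    "Afghanistan": (39, 249),
    "Yemen": (46, 301),
    "Mauritania": (16, 146),
    "Indonesia": (40, 560),
    "Bahrain": (2, 40),
    "Iraq": (4, 325),
    "Lebanon": (1, 128),
}
for country, (held, total) in mb_seats.items():
    print(f"{country}: {held}/{total} seats = {held / total:.0%}")

Which makes the spread clear: an outright majority in Sudan and the Palestinian Legislature, a sizable bloc in Tunisia, and much smaller footholds everywhere else.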

In 2012, the MB's political party won Egypt's elections (following the January Revolution in 2011), but the Brotherhood has had some trouble in Egypt since then–its president was deposed in a 2013 military coup and the organization outlawed.

The MB also does charity work, runs hospitals, etc., and is clearly using democratic means to assemble power.

According to Wikipedia:

As Islamic Modernist beliefs were co-opted by secularist rulers and official `ulama, the Brotherhood has become traditionalist and conservative, “being the only available outlet for those whose religious and cultural sensibilities had been outraged by the impact of Westernisation”.[37] Al-Banna believed the Quran and Sunnah constitute a perfect way of life and social and political organization that God has set out for man. Islamic governments must be based on this system and eventually unified in a Caliphate. The Muslim Brotherhood’s goal, as stated by its founder al-Banna was to drive out British colonial and other Western influences, reclaim Islam’s manifest destiny—an empire, stretching from Spain to Indonesia.[38] The Brotherhood preaches that Islam will bring social justice, the eradication of poverty, corruption and sinful behavior, and political freedom (to the extent allowed by the laws of Islam).

Back to Qutb:

In the early 1940s, he encountered the work of Nobel Prize-winner and French eugenicist Alexis Carrel, who would have a seminal and lasting influence on his criticism of Western civilization, as "instead of liberating man, as the post-Enlightenment narrative claimed, he believed that Western modernity enmeshed people in spiritually numbing networks of control and discipline, and that rather than build caring communities, it cultivated attitudes of selfish individualism. Qutb regarded Carrel as a rare sort of Western thinker, one who understood that his civilization "depreciated humanity" by honouring the "machine" over the "spirit and soul" (al-nafs wa al-ruh). He saw Carrel's critique, coming as it did from within the enemy camp, as providing his discourse with an added measure of legitimacy."[24]

From 1948 to 1950, he went to the United States on a scholarship to study its educational system, spending several months at Colorado State College of Education (now the University of Northern Colorado) in Greeley, Colorado. …

Over two years, he worked and studied at Wilson Teachers’ College in Washington, D.C. (one of the precursors to today’s University of the District of Columbia), Colorado State College for Education in Greeley, and Stanford University.[30] He visited the major cities of the United States and spent time in Europe on his journey home. …

On his return to Egypt, Qutb published “The America that I Have Seen”, where he became explicitly critical of things he had observed in the United States, eventually encapsulating the West more generally: its materialism, individual freedoms, economic system, racism, brutal boxing matches, “poor” haircuts,[5] superficiality in conversations and friendships,[32] restrictions on divorce, enthusiasm for sports, lack of artistic feeling,[32] “animal-like” mixing of the sexes (which “went on even in churches”),[33] and strong support for the new Israeli state.[34] Hisham Sabrin, noted that:

“As a brown person in Greeley, Colorado in the late 1940’s studying English he came across much prejudice. He was appalled by what he perceived as loose sexual openness of American men and women (a far cry from his home of Musha, Asyut). This American experience was for him a fine-tuning of his Islamic identity.”…

Qutb concluded that major aspects of American life were primitive and “shocking”, a people who were “numb to faith in religion, faith in art, and faith in spiritual values altogether”. His experience in the U.S. is believed to have formed in part the impetus for his rejection of Western values and his move towards Islamism upon returning to Egypt.

The man has a point. American art has a lot of Jackson Pollock and Andy Warhol schtick.

In 1952, the Egyptian monarchy–which was pro-Western–was overthrown by nationalists (?) like Nasser. At first Nasser and Qutb worked together, but there was something of a power struggle, and Qutb didn't approve of Nasser organizing the new Egypt along essentially secular lines instead of Islamic ideology. Qutb tried to have Nasser assassinated, and Nasser had Qutb arrested, tortured, and eventually hanged.

Aside from the fact that Qutb is Egyptian and Muslim, he and the alt-right have a fair amount in common. (Read his Wikipedia page if you don't see what I mean.) The basic critique is that the West is immoral and degenerate, with bad art and bad manners; that capitalism has created a "spiritually numbing" network of control (your boss, office dress codes, the HOA, paperwork); and that a return to spirituality (not rejecting science, but enhancing it) can fix these things.

Unfortunately, the ideology has some bad side effects. His brother, Muhammad Qutb, moved to Saudi Arabia after his release from Egyptian prison and became a professor of Islamic Studies,[96][97] where he promoted Sayyid Qutb's work. One of Muhammad Qutb's students/followers was Ayman Zawahiri, who became a member of the Egyptian Islamic Jihad[98] and mentor of Osama bin Laden.

Soraya, Empress of Iran (1953), has no interest in Islamic veiling rules

My impression–Muslim monarchs tend to be secular modernists. They see the tech other countries have (especially bombs) and want it. They see the GDPs other countries have, and want that, too. They’re not that interested in religion (which would limit their behavior) and not that interested in nationalism (as they tend to rule over a variety of different “nations.”) Many monarchs are (or were) quite friendly to the West. The King of Jordan and Shah of Iran come immediately to mind.

(I once met the Director of the CIA. He had a photograph of the King of Jordan in his office. Motioning to the photo, he told me the King was one of America’s friends.)

But modernization isn’t easy. People who have hundreds or thousands of years’ experience living a particular lifestyle are suddenly told to go live a different lifestyle, and aren’t sure how to react. The traditional lifestyle gave people meaning, but the modern lifestyle gives people TV and low infant mortality.

That’s the situation we’re all facing, really.

So what’s a society to do? Sometimes they keep their kings. Sometimes they overthrow them. Then what? You can go nationalist–like Nasser. Communist–like South Yemen. (Though I’m not sure Yemen had a king.) Or Islamic, like Iran. (As far as I can tell, the Iranian revolution had a significant communist element, but the Islamic won out.) The Iranian revolution is in no danger of spreading, though, because the Iranians practice a variety of Islam that’s a rare minority everywhere else in the world.

I hear the Saudis and certain other monarchs have stayed in power so far by using their oil money to keep everyone comfortable (staving off the stresses of modernization) and enforcing Islamic law (keeping the social system familiar.) We’ll see how long this lasts.

So one of the oddities of the Middle East is that while other parts of the world have become more liberal, it appears to have become less. You can find many Before-and-After pictures of places like Iran, where women used to mingle with men, unveiled, in Western-style dress. (In fact, I think the veil was illegal in Iran in the 50s.) War-torn Afghanistan is an even sadder case.

Mohammad Zahir Shah was king of Afghanistan from 1933 through 1973. According to Wikipedia:

“After the end of the Second World War, Zahir Shah recognised the need for the modernisation of Afghanistan and recruited a number of foreign advisers to assist with the process.[12] During this period Afghanistan’s first modern university was founded.[12]… despite the factionalism and political infighting a new constitution was introduced during 1964 which made Afghanistan a modern democratic state by introducing free elections, a parliament, civil rights, women’s rights and universal suffrage.[12]

Mohammad Zahir Shah and his wife, Queen Humaira Begum, visiting JFK at the White House, 1963
credit “Robert Knudsen. White House Photographs. John F. Kennedy Presidential Library and Museum, Boston”

While he was in Italy (undergoing eye surgery and treatment for lumbago), his cousin executed a coup and instituted a republican government. As we all know, Afghanistan has gone nowhere but up since then.

Zahir Shah returned to Afghanistan in 2002, after the US drove out the Taliban, where he received the title “Father of the Nation” but did not resume duties as monarch. He died in 2007.

His eldest daughter (Princess of Afghanistan?) is Bilqis Begum–Bilqis is the Queen of Sheba’s Islamic name–but she doesn’t have a Wikipedia page. The heir apparent is Ahmad Shah Khan, if you’re looking for someone to crown.

Back to the Muslim Brotherhood.

One of the big differences between elites and commoners is that commoners tend to be far more conservative than elites. Elites think a world in which they can jet off to Italy for medical treatment sounds awesome, while commoners think this is going to put the local village medic out of a job. Or as the world learned last November, America's upper and lower classes have very different ideas about borders, globalization, and who should be president.

Similarly, the Muslim Brotherhood seems perfectly happy to use democratic means to come to power where it can.

(The MB apparently does a lot of charity work, which is part of why it is popular.)

The relationship between the MB and Saudi Arabia is interesting. After Egypt cracked down on the MB, thousands of members went to Saudi Arabia. SA needed teachers, and many of the MB were teachers, so it seemed mutually beneficial. The MB thus took over the Saudi educational system, and probably large chunks of its bureaucracy.

Relations soured between SA and the MB due to SA’s decision to let the US base troops there for its war against Iraq, and due to the MB’s involvement in the Arab Spring and active role in Egypt’s democracy–Saudi monarchs aren’t too keen on democracy. In 2014, SA declared the MB a “terrorist organization.”

Lots of people say the MB is a terrorist org, but I’m not sure how that distinguishes them from a whole bunch of other groups in the Middle East. I can’t tell what links the MB has (if any) to ISIS. (While both groups have similar-sounding goals, it’s entirely possible for two different groups to both want to establish an Islamic Caliphate.)

The MB reminds me of the Protestant Reformation, with its emphasis on returning to the Bible as the sole source of religious wisdom, the establishment of Puritan theocracies, and a couple hundred years of Catholic/Protestant warfare. I blame the Protestant Reformation on the spread of the printing press in Europe, without which the whole idea of reading the Bible for yourself would have been nonsense. I wager something similar happened recently in the Middle East, with cheap copies of the Quran and other religious (and political) texts becoming widely available.

I’ll have to read up on the spread of (cheap) printing in the Islamic world, but a quick search turns up Ami Ayalon’s The Arabic Print Revolution: Cultural Production and Mass Readership:

so that looks like a yes.

What is Cultural Appropriation?

White person offended at the Japanese on behalf of Mexicans, who actually think Mario in a sombrero is awesome

“Cultural appropriation” means “This is mine! I hate you! Don’t touch my stuff!”

Cultural appropriation is one of those newspeak buzz-phrases that sound vaguely like real things, but upon any kind of inspection completely fall apart. Wikipedia defines cultural appropriation as "the adoption or use of the elements of one culture by members of another culture,"[1] but this is obviously incorrect. By this definition, Louis Armstrong committed cultural appropriation when he learned to play the white man's trumpet. So does an immigrant who moves to the US and learns English.

Obviously this is not what anyone means by cultural appropriation–this is just cultural diffusion, a completely natural, useful, and nearly unstoppable part of life.

A more nuanced definition is that cultural appropriation is “when someone from a more powerful group starts using an element of a less powerful group’s culture.” The idea is that this is somehow harmful to the people of the weaker culture, or at least highly distasteful.

To make an analogy: Let’s suppose you were a total nerd in school. The jocks called you names, locked you in your locker, and stole your lunch money. You were also a huge Heavy Metal fan, for which you were also mocked. The jocks even tried to get the Student Council to pass laws against playing heavy metal at the school dance.

And then one day, the biggest jock in the school shows up wearing a “Me-Tallica” shirt, and suddenly “Me-Tallica” becomes the big new thing among all of the popular kids. Demand skyrockets for tickets to heavy metal concerts, and now you can’t afford to go see your favorite band.

You are about to go apoplectic: “Mine!” you want to yell. “That’s my thing! And it’s pronounced Meh-tallica, you idiots!”

SJWs protest Japanese women sharing Japanese culture with non-Japanese. The sign reads “It wouldn’t be so bad w/out white institutions condoning erasure of the Japanese narrative + orientalism which in turn supports dewomaning + fetishizing AAPI + it is killing us”

How many cases of claimed cultural appropriation does this scenario actually fit? It requires meeting three criteria to count: a group must be widely discriminated against, its culture must be oppressed or denigrated, and then that same culture must be adopted by the oppressors. This is the minimal definition; a stricter definition also requires some actual harm to the oppressed group.

Thing is, there is not a whole lot of official oppression going on in America these days. Segregation ended around the 60s. I’m not sure when the program of forced Native American assimilation via boarding schools ended, but it looks like conditions improved around 1930 and by 1970 the government was actively improving the schools. Japanese and German internment ended with World War II.

It is rather hard to prove oppression–much less cultural oppression–after the 70s. No one is trying to wipe out Native American languages or religious beliefs; there are no laws against rap music or dreadlocks. It’s even harder to prove oppression for recent arrivals whose ancestors didn’t live here during segregation, like most of our Asians and Hispanics (America was about 88% non-Hispanic white and 10% black prior to the 1965 Immigration Act.)

So instead, in cases like the anti-Kimono Wednesdays protest photo above, the claim is inverted:

It wouldn’t be so bad w/out white institutions condoning erasure of the Japanese narrative + orientalism which in turn supports dewomaning + fetishizing AAPI + it is killing us

SJWs objected to Japanese women sharing kimonos with non-Japanese women not because of a history of harm to Japanese people or culture, but because sharing of the kimonos itself is supposedly inspiring harm.

"Orientalism" is one of those words that you probably haven't encountered unless you've had to read Edward Said's book on the subject (I had to read it twice.) It's a pretty meaningless concept to Americans, because unlike Monet, we never really went through an Oriental-fascination phase. For good or ill, we just aren't very interested in learning about non-Americans.

The claim that orientalism is somehow killing Asian American women is strange–are there really serial killers who target Asian ladies specifically because they have a thing for Madame Butterfly?–but at least suggests a verifiable fact: are Asian women disproportionately murdered?

Of course, if you know anything about crime stats, you know that homicide victims tend to be male and most crime is intraracial, not interracial. For example, according to the FBI, of the 12,664 people murdered in 2011, 9,829 were men–about 78%. The FBI's racial data is only broken down into White (5,825 victims), Black (6,329), Other (335), and Unknown (175)–there just aren't enough Asian homicide victims to count them separately. For women specifically, the number of Other Race victims is only 110–or just a smidge under 1% of total homicides.
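A quick sanity check of those percentages, using nothing but the FBI figures already quoted (the variable names are just for illustration):

# FBI homicide figures for 2011, as quoted above
total_victims = 12_664
male_victims = 9_829
other_race_female_victims = 110
print(f"male share of victims: {male_victims / total_victims:.1%}")                     # ~77.6%
print(f"'Other' race female share: {other_race_female_victims / total_victims:.2%}")    # ~0.87%

The male share comes out to about 78%, and the "Other"-race female share to a bit under 1%, just as stated.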

And even these numbers are over-estimating the plight of Asian Americans, as Other also includes non-Asians like Native Americans (whose homicide rates are probably much more concerning.)

Call me crazy, but I don’t think kimono-inspired homicides are a real concern.

Kylie Jenner Accused of Cultural Appropriation for Camo Bikini Ad

In practice, SJWs define cultural appropriation as "any time white people use an element from a non-white group's culture"–or in the recent Kylie Jenner bikini case, "culture" can be expanded to "anything that a person from that other culture ever did, even if millions of other people from other cultures have also done that same thing." (My best friend in high school wore camo to prom. My dad wore camo to Vietnam.) And fashion trends come and go–even if Destiny's Child created a camo bikini trend 16 years ago, the trend did not last. Someone else can come along and start a new camo bikini trend.

(Note how TeenVogue does not come to Kylie's defense by pointing out that these accusations are fundamentally untrue. Anyone can make random, untrue accusations about famous people–schizophrenics do it all the time–but such accusations are not normally considered newsworthy.)

“Cultural appropriation” is such a poorly defined mish-mash of ideas precisely because it isn’t an idea. It’s just an emotion: This is mine, not yours. I hate you and you can’t have it. When white people use the phrase, it takes on a secondary meaning: I am a better white person than you.