In his post on the Chamber of Guf, Slate Star Codex discussed a slate of psychiatric conditions where the sufferer becomes obsessed with not sinning in some particular way. In homosexual OCD, for example, the sufferer becomes obsessed with fear that they are homosexual or might have homosexual thoughts despite not actually being gay; people with incest OCD become paranoid that they might have incestuous thoughts, etc. Notice that in order to be defined as OCD, the sufferers have to not actually be gay or interested in sex with their relatives–this is paranoia about a non-existent transgression. Scott also notes that homosexual OCD is less common among people who don’t think of homosexuality as a sin, but these folks have other paranoias instead.
The “angel” in this metaphor is the selection process by which the brain decides which thoughts, out of the thousands we have each day, to focus on and amplify; “Guf” is the store of all available thoughts. Quoting Scott:
I studied under a professor who was an expert in these conditions. Her theory centered around the question of why angels would select some thoughts from the Guf over others to lift into consciousness. Variables like truth-value, relevance, and interestingness play important roles. But the exact balance depends on our mood. Anxiety is a global prior in favor of extracting fear-related thoughts from the Guf. Presumably everybody’s brain dedicates a neuron or two to thoughts like “a robber could break into my house right now and shoot me”. But most people’s Selecting Angels don’t find them worth bringing into the light of consciousness. Anxiety changes the angel’s orders: have a bias towards selecting thoughts that involve fearful situations and how to prepare for them. A person with an anxiety disorder, or a recent adrenaline injection, or whatever, will absolutely start thinking about robbers, even if they consciously know it’s an irrelevant concern.
In a few unlucky people with a lot of anxiety, the angel decides that a thought provoking any strong emotion is sufficient reason to raise the thought to consciousness. Now the Gay OCD trap is sprung. One day the angel randomly scoops up the thought “I am gay” and hands it to the patient’s consciousness. The patient notices the thought “I am gay”, and falsely interprets it as evidence that they’re actually gay, causing fear and disgust and self-doubt. The angel notices this thought produced a lot of emotion and occupied consciousness for a long time – a success! That was such a good choice of thought! It must have been so relevant! It decides to stick with this strategy of using the “I am gay” thought from now on. …
Politics has largely replaced religion as the framework through which most people think about “sin,” and modern memetic structures seem extremely well designed to amplify political sin-based paranoia: articles like “Is your dog’s Halloween costume racist?” rack up profitable clicks and get shared widely across social media platforms, whether by fans or opponents of the article.
Both religions and political systems have an interest in promoting such concerns, since they also sell the cures–forgiveness and salvation for the religious; economic and social policies for the political. This works best if it targets a very common subset of thoughts, like sexual attraction or dislike of random strangers, because you really can’t prevent all such thoughts, no matter how hard you try.
Personal OCD is bad enough; a religious sufferer obsessed with their own moralistic sin may feel compelled to retreat to a monastery or wall themselves up to avoid temptation. If a whole society becomes obsessed, though, widespread paranoia and social control may result. (Society can probably be modeled as a meta-brain.)
I propose that our society, due to its memetic structure, is undergoing OCD-inducing paranoia spirals where the voices of the most paranoid are being allowed to set political and moral directions. Using racism as an example, it works something like this:
First, we have what I’ll call the Aristotelian Mean State: an appropriate, healthy level of in-group preference that people would not normally call “racism.” This Mean State is characterized by liking and appreciating one’s own culture, generally preferring it to others, but admitting that your culture isn’t perfect and other cultures have good points, too.
Deviating too far from this mean is generally considered sinful–in one direction, we get “My culture is the best and all other cultures should die,” and too far in the other, “All other cultures are best and my culture should die.” One of these is called “racism,” the other “treason.”
When people get Racism OCD, they become paranoid that even innocuous or innocent things–like dog costumes–could be a sign of racism. In this state, people worry about even normal, healthy expressions of ethnic pride, just as a person with homosexual OCD worries about completely normal appreciation of athleticism or admiration of a friend’s accomplishments.
Our culture then amplifies such worries by channeling them through Tumblr and other social media platforms where the argument “What do you mean you’re not against racism?” does wonders to break down resistance and convince everyone that normal, healthy ethnic feelings are abnormal, pathological racism and that sin is everywhere, you must constantly interrogate yourself for sin, you must constantly learn and try harder not to be racist, etc. There is always some new area of life that a Tumblrista can discover is secretly sinful, though you never realized it before, spiraling people into new arenas of self-doubt and paranoia.
As for the rest of the internet, those not predisposed toward Racism OCD are probably predisposed toward Anti-Racism OCD. Just as people with Racism OCD see racism everywhere, folks with Anti-Racism OCD see anti-racism everywhere. These folks think that even normal, healthy levels of not wanting to massacre the outgroup is pathological treason. (This is probably synonymous with Treason OCD, but is currently in a dynamic relationship with the perception that anti-racists are everywhere.)
Since there are over 300 million people in the US alone–not to mention 7 billion in the world–you can always find some case to justify paranoia. You can find people who say they merely have a healthy appreciation for their own culture but really do have murderous attitudes toward the out-group–something the out-group, at least, has good reason to worry about. You can find people who say they have a healthy attitude toward their own group, but still act in ways that could get everyone killed. You can find explicit racists and explicit traitors, and you can find lots of people with amplified, paranoid fears of both.
These two paranoid groups, in turn, can feed off each other, each pointing at the other and screaming that everyone trying to promote “moderatism” is actually the worst sinner of the other side in disguise, and that moderatism itself is therefore evil. This feedback loop gives us things like the “It’s okay to be white” posters, which manage to make an entirely innocuous statement sound controversial due to our conviction that people only make innocuous statements because they are trying to make the other guy sound like a paranoid jerk who disputes innocuous statements.
Racism isn’t the only sin devolving into OCD–we can also propose Rape OCD, where people become paranoid about behaviors like flirting, kissing, or even thinking about women. There are probably other OCDs (trans OCD? food contamination OCD?), but these are the big ones that come to mind right now.
Thankfully, Scott also proposes that awareness of our own psychology may allow us to recognize and moderate ourselves:
All of these can be treated with the same medications that treat normal OCD. But there’s an additional important step of explaining exactly this theory to the patient, so that they know that not only are they not gay/a pedophile/racist, but it’s actually their strong commitment to being against homosexuality/pedophilia/racism which is making them have these thoughts. This makes the thoughts provoke less strong emotion and can itself help reduce the frequency of obsessions. Even if it doesn’t do that, it’s at least comforting for most people.
The question, then, is how do we stop our national neuroses from causing disasters?
Do people eventually grow ideologically resistant to dangerous local memes, but remain susceptible to foreign memes, allowing them to spread like invasive species?
And if so, can we find some way to memetically vaccinate ourselves against deadly ideas?
Memetics is the study of how ideas (“memes”) spread and evolve, using evolutionary theory and epidemiology as models. A “viral meme” is one that spreads swiftly through society, “infecting” minds as it goes.
Of course, most memes are fairly innocent (e.g. fashion trends) or even beneficial (“wash your hands before eating to prevent disease transmission”), but some ideas, like communism, kill people.
Ideologies consist of a big set of related ideas rather than a single one, so let’s call them memeplexes.
Almost all ideological memeplexes (and religions) sound great on paper–they have to, because that’s how they spread–but they are much more variable in actual practice.
Any idea that causes its believers to suffer is unlikely to persist–at the very least, because its believers die off.
Over time, in places where people have been exposed to ideological memeplexes, their worst aspects become known and people may learn to avoid them; the memeplexes themselves can evolve to be less harmful.
Over in epidemiology, diseases humans have been exposed to for a long time become less virulent as humans become adapted to them. Chickenpox, for example, is a fairly mild disease that kills few people because the virus has been infecting people for as long as people have been around (the ancestral Varicella-Zoster virus evolved approximately 65 million years ago and has been infecting animals ever since). Rather than kill you, chickenpox prefers to enter your nerves and go dormant for decades, reemerging later as shingles, ready to infect new people.
By contrast, smallpox (Variola major and Variola minor) probably evolved from a rodent-infecting virus about 16,000 to 68,000 years ago. That’s a big range, but either way, it’s much more recent than chickenpox. Smallpox made its first major impact on the historical record around the third century BC in Egypt, and thereafter became a recurring plague in Africa and Eurasia. Note that unlike chickenpox, which is old enough to have spread throughout the world with humanity, smallpox emerged long after major population splits occurred–like part of the Asian clade splitting off and heading into the Americas.
By 1400, Europeans had developed some immunity to smallpox (due to those who didn’t have any immunity dying), but when Columbus landed in the New World, folks here had never seen the disease before–and thus had no immunity. Diseases like smallpox and measles ripped through native communities, killing approximately 90% of the New World population.
If we extend this metaphor back to ideas–if people have been exposed to an ideology for a long time, they are more likely to have developed immunity to it or the ideology to have adapted to be relatively less harmful than it initially was. For example, the Protestant Reformation and subsequent Catholic counter-reformation triggered a series of European wars that killed 10 million people, but today Catholics and Protestants manage to live in the same countries without killing each other. New religions are much more likely to lead all of their followers in a mass suicide than old, established religions; countries that have just undergone a political revolution are much more likely to kill off large numbers of their citizens than ones that haven’t.
This is not to say that old ideas are perfect and never harmful–chickenpox still kills people and is not a fun disease–but that any bad aspects are likely to become more mild over time as people wise up to bad ideas (certain caveats apply).
But this process only works for ideas that have been around for a long time. What about new ideas?
You can’t stop new ideas. Technology is always changing. The world is changing, and it requires new ideas to operate. When these new ideas arrive, even terrible ones can spread like wildfire because people have no memetic antibodies to resist them. New memes, in short, are like invasive memetic species.
In the late 1960s, 15 million people still caught smallpox every year. In 1980, it was declared officially eradicated–not one case had been seen since 1977, due to a massive, world-wide vaccination campaign.
Humans can acquire immunity to disease in two main ways. The slow way is everyone who isn’t immune dying; everyone left alive happens to have adaptations that let them not die, which they can pass on to their children. As with chickenpox, over generations, the disease becomes less severe because humans become successively more adapted to it.
The fast way is to catch a disease, produce antibodies that recognize and can fight it off, and thereafter enjoy immunity. This, of course, assumes that you survive the disease.
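The two routes to immunity above map directly onto the standard epidemiological toolkit. As a minimal sketch of the analogy–using a simple discrete-time SIR model with made-up, illustrative parameters, not real disease (or meme) data–we can show how a population with some prior immunity suffers a much smaller outbreak than a naive one:

```python
# A discrete-time SIR (Susceptible-Infected-Recovered) model.
# "initially_immune" is the fraction of the population that starts
# out resistant -- whether from vaccination or, in the memetic
# analogy, from prior exposure to a related idea.

def epidemic_final_size(pop=1000.0, initially_immune=0.0,
                        beta=0.3, gamma=0.1, steps=1000):
    """Return the fraction of the whole population ever infected."""
    s = pop * (1 - initially_immune) - 1   # susceptible
    i = 1.0                                # one initial case
    r = pop * initially_immune             # recovered or already immune
    for _ in range(steps):
        new_infections = beta * s * i / pop
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    # Ever-infected = initial susceptibles who left the S compartment,
    # plus the single seed case.
    return (pop - pop * initially_immune - s) / pop

# A fully naive population suffers a large outbreak...
print(epidemic_final_size(initially_immune=0.0))
# ...while substantial prior immunity shrinks it dramatically.
print(epidemic_final_size(initially_immune=0.6))
```

With these parameters the basic reproduction number is beta/gamma = 3, so the naive population sees most people infected, while 60% prior immunity pushes the effective reproduction number close to 1 and the outbreak fizzles–the same logic the post applies to old versus newly introduced memes.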
Vaccination works by teaching the body’s immune system to recognize a disease without exposing it to a full-strength germ, using a weakened or harmless version of the germ instead. Early on, material from actual smallpox scabs or lesions was used to inoculate people–a risky method, since the germs often weren’t that weak. Later, people discovered that cowpox was similar enough to smallpox that its antibodies could also fight smallpox, but cowpox itself was too adapted to cattle hosts to seriously harm humans. (Today I believe the vaccine uses a different weakened virus, but the principle is the same.)
The good part about memes is that you do not actually have to inject a physical substance into your body in order to learn about them.
Ideologies are very difficult to evaluate in the abstract, because, as mentioned, they are all optimized to sound good on paper. It’s their actual effects we are interested in.
So if we want to learn whether an idea is good or not, it’s probably best not to learn about it by merely reading books written by its advocates. Talk to people in places where the ideas have already been tried and learn from their experiences. If those people tell you this ideology causes mass suffering and they hate it, drop it like a hot potato. If those people are practicing an “impure” version of the ideology, it’s probably an improvement over the original.
For example, “communism” as practiced in China today is quite different from “communism” as practiced there 50 years ago–so much so that the modern system really isn’t communism at all. There was never, to my knowledge, an official changeover from one system to another, just a gradual accretion of improvements. This speaks strongly against communism as an ideology, since no country has managed to be successful by moving toward ideological communist purity, only by moving away from it–though they may still find it useful to retain some of communism’s original ideas.
I think there is a similar dynamic occurring in many Islamic countries. Islam is a relatively old religion that has had time to adapt to local conditions in many different parts of the world. For example, in Morocco, where the climate is more favorable to raising pigs than in other parts of the Islamic world, the taboo against pigs isn’t as strongly observed. The burka is not an Islamic universal, but characteristic of central Asia (the similar niqab is from Yemen). Islamic head coverings vary by culture–such as the kurhars traditionally worn by unmarried women in Ingushetia, north of the Caucasus, or the caps popular in Xinjiang. Turkey has laws officially restricting burkas in some areas, and Syria discourages even hijabs. Women in Iran did not go heavily veiled prior to the Iranian Revolution. So the insistence on extensive veiling in many Islamic communities (like the territory conquered by ISIS) is not a continuation of old traditions, but the imposition of a new, idealized, version of Islam.
Purity is counter to practicality.
Of course, this approach is hampered by the fact that what works in one place, time, and community may not work in a different one. Tilling your fields one way works in Europe, and tilling them a different way works in Papua New Guinea. But extrapolating from what works is at least a good start.
I woke up this morning with the realization that I needed to make a meme about Nongqawuse. (Context.)
These were the result:
In 1997, 39 members of the Heaven’s Gate cult committed suicide in order to reach a UFO they believed was accompanying comet Hale-Bopp.
In 1978, 918 followers of cult leader Jim Jones committed suicide by drinking poisoned Kool-Aid–the origin of the phrase, “Don’t drink the Kool-Aid.”
Mathematician Ted Kaczynski mailed bombs to professors and others for nearly two decades, eventually coercing newspapers into publishing his manifesto, Industrial Society and Its Future.
82 Branch Davidians, led by David Koresh, died when their compound burned down during a raid by the ATF. It appears that the Branch Davidians set the fire themselves.
The Thugs were an Indian cult that ritually strangled and murdered travelers.
Timothy McVeigh killed 168 people in 1995 when he bombed the Alfred P. Murrah Federal Building in Oklahoma City, in what he claimed was revenge for ATF’s siege against the Branch Davidians.
Hong Xiuquan claimed to be Jesus’ little brother and led the Taiping Rebellion, which resulted in the deaths of 20-30 million people.
Lee Harvey Oswald assassinated President John F. Kennedy in 1963.
Nongqawuse was a Xhosa prophet who convinced her people that if they sacrificed all of their cattle, the British would be “swept into the sea.” The Xhosa sacrificed their cattle, the British did not get swept into the sea, and mass famine resulted.
Charles Manson was a cult leader whose followers carried out at least nine murders in 1969.
Liberal Christian denominations (ie, Mainline Protestants) are caught in a paradox: even though they have increasingly defined themselves as open to everyone, their membership rolls keep decreasing. It’s as if the more people they let in, the fewer people show up.
[insert Groucho Marx cartoon about not wanting to belong to the set of all clubs that would have him.]
Mainline Protestant churches have been hit the hardest. The Evangelical Lutheran Church in America (ELCA) in Minnesota has lost almost 200,000 members since 2000 and about 150 churches. A third of the remaining 1,050 churches have fewer than 50 members. The United Methodist Church, the second largest Protestant denomination in Minnesota, has shuttered 65 churches since 2000.
Catholic membership statewide has held steady, but the number of churches fell from 720 in 2000 to 639 last year, according to official Catholic directories.
Note the timeframe: we’re not talking about change over the course of a century. The Presbyterian church of Minnesota has lost 42% of its members since 2000.
Meanwhile, membership is basically holding steady at conservative denominations that practically define themselves by whom they don’t let in. Evangelicals and fundamentalists are not hemorrhaging nearly as badly as their more welcoming brethren.
Among Mainline Protestants, the only denomination that’s basically holding steady is the American Baptist Church, which has gained black souls as it has lost white ones.
The African Methodist Episcopal Church has more than doubled in size.
Interestingly, a conservative spin-off of the Presbyterian church is doing fine, and the notorious Southern Baptists are doing fine. [source for denomination data.]
The Amish, who are practically their own ethnic group due to only marrying other Amish, have been nearly doubling their population every 20 years, and that’s even with a significant number of children leaving each generation. Of course, the Amish have plenty of children.
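The doubling claim above is easy to put in arithmetic terms: a population that doubles every 20 years grows roughly 3.5% per year, and 32-fold over a century. A quick sketch (the numbers are illustrative, not actual Amish census figures):

```python
# Compound-growth arithmetic for a population that doubles every
# "doubling_time_years" years.

def annual_growth_rate(doubling_time_years):
    """Annual growth rate implied by a given doubling time."""
    return 2 ** (1 / doubling_time_years) - 1

def project(initial, years, doubling_time_years=20):
    """Projected population after the given number of years."""
    return initial * 2 ** (years / doubling_time_years)

rate = annual_growth_rate(20)      # about 0.035, i.e. ~3.5% per year
print(round(rate * 100, 1))
print(round(project(1000, 100)))   # 1,000 people -> 32,000 in a century
```

Even after subtracting the children who leave each generation, a growth rate in this range compounds quickly–which is why fertility, as the next paragraph notes, dominates the denominational arithmetic.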
Of course, one of the biggest factors in the decline of liberal denominations is fertility–the Amish have a lot more kids than Mainline Protestants.
But why have the Mainlines, with their open and tolerant ideologies and welcoming attitude toward nearly everyone, not attracted more members as society in general has moved leftward on many issues? If you have read Dumbing of Age for as long as I have, then you are well aware of the main character Joyce’s rejection, over the issue of homosexuality, of the particular brand of conservative Christianity she was raised and homeschooled in, and her subsequent search for a more liberal church (which has so far involved freaking out at an Episcopalian service because it smacked of papistry).
Why are Presbyterians failing to attract the Joyces of the world?
I propose this is because functionally religious identity is about group identity, and a group identity that hinges on “openness to outsiders” is not a functional group identity.
Now you might be saying, “Wait, I thought religious identity had to do with what you think God, or ethics, or how the world was created. People give some sort of rational thought to their beliefs, and then pick the church that best suits them.”
No. I don’t think anyone ever said, “Hey, the religion where you can’t eat pigs sounds much more rational than the religion where you can’t eat cows.” Nor did anyone logically think that the religions with animal sacrifice sounded more logical than the one where the feces of priests are holy, or where alien ghosts are causing all of your problems. (Basically, every religion that isn’t whatever you happen to practice is full of totally illogical beliefs.)
This is why conversations between atheists and theists are so boring. Atheists try to explain that religion doesn’t make sense, and theists try to explain that religion is about faith, not logic.
The nation of Pakistan is 96.4% Muslim, and it didn’t get that way because everyone in Pakistan spontaneously decided when they were about 16 years old that they all agreed that Islam was the only true religion. Israel is 74.7% Jewish, not because all of the Jews logically examined all of the world’s religion and then spontaneously agreed that Judaism was the best one. No; most of the world’s Muslims are Muslim because their parents were Muslim. Most of the world’s Jews were born to other Jews. Most Christians were born to Christians, and so on.
Multi-religious states exist, but within those states, people tend to marry within their own religion or abandon religion altogether, for religion is ethnicity.
3,000 years ago, this would have been an unexceptional statement. The People of the Crocodile God worshiped crocodiles and were certain that those folks over there who worshiped the Snake God were up to no good. Note that they didn’t deny the existence of the Snake God; they just didn’t worship it.
Our ancestral memetic environment was very different from our modern one because most people couldn’t travel far and mass media didn’t exist. As a result, people tended to only interact with their own group; outsiders were demonized and war was frequent. To be part of a tribe was to worship the tribe’s totems or ancestral deities. In an uncertain world where wind and rain, life and death were mysteries in the hands of capricious deities, to not worship the tribal gods was akin to saying you did not care whether your brothers lived or died.
Indeed, the big issue Rome had with Christians and Jews was less that they worshiped some strange god with weird food rules and transubstantiation–the empire had a pretty inclusive attitude of adopting new deities as it encountered them–than that Christians and Jews refused to adopt the empire’s deities into their pantheon. More to the point, they refused to sacrifice to the Roman gods, which the Romans believed would bring the wrath of the gods upon them and showed very poor civic spirit. As Tertullian complained in the second century:
They think the Christians the cause of every public disaster, of every affliction with which the people are visited. If the Tiber rises as high as the city walls, if the Nile does not send its waters up over the fields, if the heavens give no rain, if there is an earthquake, if there is famine or pestilence, straightway the cry is, “Away with the Christians to the lions!”
Monotheism of course triumphed over paganism by taking over the empire itself. The conquering of pagans and thus their gods happened on a small scale within Judea, then on a large scale with Rome and Mecca. The big religions now expanded past pure ethnic lines, but still functioned for ordinary people as ethnic identities due to the lack of long-distance travel–Christians, for example, were members of “Christendom,” which stood in contrast with the pagan, barbarian, and non-Christian hordes–places which, of course, the average Christian never saw.
But modern technology has drastically changed our memetic environment. Today you can hop in a car or plane and within hours be hundreds or thousands of miles away–distances your ancestors would have taken months to walk. You can pick up your phone and talk to a friend on the other side of the planet, or read headlines detailing the spread of disease in a foreign country. (I have written extensively about this change in the memes category.)
In the ancestral memetic environment, almost everyone you talked to and got information from was either your immediate family or lived in your community. As a result, memes that promote the survival of you, your family, your community, and your genes tend to dominate. Memes that promote the survival of strangers don’t do as well.
In our modern memetic environment, most of the people you talk to and get information from are strangers. You get movie recommendations from strangers on Rotten Tomatoes; you learn about new business ideas from the reporters at Forbes or Wired or The Wall Street Journal; you get parenting advice from a nanny on TV and medical advice from WebMD. You no longer raise barns or herd goats with your brothers, cousins, and extended family, but work in a cubicle farm with a hundred people who probably aren’t even 5th cousins.
As a result, the modern memetic environment favors horizontal meme transfer (rather than vertical, ie from parent to child). This environment favors the spread of memes that prioritize the interests of strangers, simply because so many of the people you are talking to and interacting with are strangers.
The liberal churches–in particular, the Mainline Protestants–have worked hard to signal openness to others, because this is how horizontal morality works. (The group identity of people who define themselves as open to others thus takes as its defining outgroup “people who aren’t open to others.”) But if religion itself is about group identity, then a group identity of “let’s be open to others and not have a strong group identity” is going to leave people unenthusiastic about attending liberal churches.
Group identity used to be more intuitive for people, again, because they mostly interacted with members of their own group. Modern religious identity for most Christians is no longer explicitly ethnic (not if you want a place in polite society,) so the “outgroup” has switched to gay people, who are such a small percent of the population (2-3%) that they’re effectively a symbolic issue for most parishioners. Unlike those dastardly followers of the Snake God, homosexuals have never made their own army, invaded a neighboring tribe’s territory, massacred all of the women and carried off the men.
(This is, in my opinion, a very silly rock to build one’s church on. Certainly churches for the first 1,900 years of Christianity didn’t make this a major, defining point of what makes them different from their competitors. Jesus himself didn’t say a whole lot about gay people.)
And getting back to fertility: people with stronger group identities–such as people whose religions tell them they should have a group identity, and that it is good to have a group identity that excludes those [evil outgroup people]–tend to have more children, who are the literal future of the church.
Summary version: Religion is about group identity, but the modern memetic environment, ie liberalism, is anti-group identity. Churches that try to set themselves up in opposition to group identity therefore fail. But since ethnic identity is no longer in fashion, conservative religious groups now define themselves in opposition to homosexuals, a somewhat symbolic opposition considering that homosexuals have never constituted a military threat to anyone’s ethnic group.
I am on vacation, and so have only been able to take notes on the posts I want to write for the past week. Here is the outline I jotted down in the car:
When Capitalism Devours Democracy
Ken Starr, Mueller, the media, and endless for-profit, anti-nation investigations into the president. (Actually, Tom Nichols’s discussion of the evolution of talk radio and cable news and their deleterious effects on political discourse is one of the better parts of his book, The Death of Expertise.)
The overly complex legal code + endless investigation + the media + advertising dollars => undermining government function.
Watergate, White Water, Monica, Russiagate, etc.
Can you imagine the national reaction if someone tried to investigate George Washington the same way? It would have been seen not as “anti-George Washington,” but as fundamentally anti-American, an attempt to subvert democracy itself and interfere with the proper functioning of the nation.
Note the complexity of the modern legal, economic, and tax systems, which simultaneously make it very hard for anyone doing much of anything to comply with every single law (have you ever jaywalked? Accidentally miscounted a deduction on your taxes?) and ensure that, with enough searching, if you want to pin something bad on someone, you probably can.
Even though you believe in your heart that you have done nothing wrong, you have no idea whether you might be admitting that you did something that is against the law. There are tens of thousands of criminal statutes on the books in America today. Most of them you have never heard of, and many of them involve conduct that nobody would imagine could ever be a crime.
(Unless you’ve been pulled over for speeding. Then obviously you pull out your driver’s license and talk like a normal human.)
In short, the media discovered with Nixon and Watergate (at least within the past century or so) that constant presidential scandals could be good for ratings, and certain folks in the government discovered with Bill Clinton and Monica Lewinsky that if you go digging for long enough, eventually you can find some kind of dirt to pin on someone–even if it’s completely irrelevant, idiotic dirt that has nothing to do with the president’s ability to govern.
This creates the incentive for the Media to constantly push the drumbeat narrative of “presidential scandal!” which leads to people truly believing that there is much more scandal than there really is.
Theory: Monica, Benghazi, Russiagate, and maybe even Watergate were all basically trumped-up hogwash played for ratings dollars. (Well, clearly someone broke into the Watergate hotel.)
The sheer complexity of the modern legal system, which allows this to happen, also incentivizes each party to push for constant investigations of the other party’s presidents. In essence, both sides are moving toward mutual defect-defect, with the media egging them on.
And We the People are the suckers.
I feel like there are concepts here for which we need better words.
Make no mistake: Nichols is annoyingly arrogant. He draws a rather stark line between “experts” (who know things) and everyone else (who should humbly limit themselves to voting between options defined for them by the experts.) He implores people to better educate themselves in order to be better voters, but has little patience for autodidacts and bloggers like myself who are actually trying.
But arrogance alone doesn’t make someone wrong.
Nichols’s first thesis is simple: most people are too stupid or ignorant to second-guess experts or even contribute meaningfully to modern policy discussions. How can people who can’t find Ukraine on a map or think we should bomb the fictional city of Agrabah contribute in any meaningful way to a discussion of international policy?
It was one thing, in 1776, to think the average American could vote meaningfully on the issues of the day–a right they took by force, by shooting anyone who told them they couldn’t. Life was less complicated in 1776, and the average person could master most of the skills they needed to survive (indeed, pioneers on the edge of the frontier had to be mostly self-sufficient in order to survive.) Life was hard–most people engaged in long hours of heavy labor plowing fields, chopping wood, harvesting crops, and hauling necessities–but could be mastered by people who hadn’t graduated from elementary school.
But the modern industrial (or post-industrial) world is much more complicated than the one our ancestors grew up in. Today we have cars (maybe even self-driving cars), electrical grids and sewer systems, atomic bombs and fast food. The speed of communication and transportation has made it possible to chat with people on the other side of the earth and show up on their doorstep a day later. The amount of specialized, technical knowledge necessary to keep modern society running would astonish the average caveman–even with 15+ years of schooling, the average person can no longer build a house, nor even produce basic necessities like clothes or food. Most of us can’t even make a pencil.
Even experts who are actually knowledgeable about their particular area may be completely ignorant of fields outside of their expertise. Nichols speaks Russian, which makes him an expert in certain Russian-related matters, but he probably knows nothing about optimal high-speed rail networks. And herein lies the problem:
The American attachment to intellectual self-reliance described by Tocqueville survived for nearly a century before falling under a series of assaults from both within and without. Technology, universal secondary education, the proliferation of specialized expertise, and the emergence of the United States as a global power in the mid-twentieth century all undermined the idea… that the average American was adequately equipped either for the challenges of daily life or for running the affairs of a large country.
… the political scientist Richard Hofstadter wrote that “the complexity of modern life has steadily whittled away the functions the ordinary citizen can intelligently and competently perform for himself.”
… Somin wrote in 2015 that the “size and complexity of government” have made it “more difficult for voters with limited knowledge to monitor and evaluate the government’s many activities. The result is a polity in which the people often cannot exercise their sovereignty responsibly and effectively.”
In other words, society is now too complex and people too stupid for democracy.
Nichols’s second thesis is that people used to trust experts, which let democracy function, but today they are less trusting. He offers no evidence other than his general conviction that this change has happened.
He does, however, detail the ways he thinks that 1. people have been given inflated egos about their own intelligence, and 2. our information-delivery system has degenerated into misinformational goo, resulting in the trust-problems he believes we are having. These are interesting arguments and worth examining.
A bit of summary:
Indeed, maybe the death of expertise is a sign of progress. Educated professionals, after all, no longer have a stranglehold on knowledge. The secrets of life are no longer hidden in giant marble mausoleums… in the past, there was less tension between experts and laypeople, but only because citizens were simply unable to challenge experts in any substantive way. …
Participation in political, intellectual, and scientific life until the early twentieth century was far more circumscribed, with debates about science, philosophy, and public policy all conducted by a small circle of educated males with pen and ink. Those were not exactly the Good Old Days, and they weren’t that long ago. The time when most people didn’t finish high school, when very few went to college, and only a tiny fraction of the population entered professions is still within living memory of many Americans.
Aside from Nichols’s insistence that he believes modern American notions about gender and racial equality, I get the impression that he wouldn’t mind the Good Old Days of genteel pen-and-ink discussions between intellectuals. However, I question his claim that participation in political life was far more circumscribed–after all, people voted, and politicians liked getting people to vote for them. People everywhere, even illiterate peasants on the frontier or up in the mountains, like to gather and debate about God, politics, and the meaning of life. The question is less “Did they discuss it?” and more “Did their discussions have any effect on politics?” Certainly we can point to abolition, women’s suffrage, prohibition, and the Revolution itself as heavily grass-roots movements.
But continuing with Nichols’s argument:
Social changes only in the past half century finally broke down old barriers of race, class, and sex not only between Americans in general but also between uneducated citizens and elite experts in particular. A wider circle of debate meant more knowledge but more social friction. Universal education, the greater empowerment of women and minorities, the growth of a middle class, and increased social mobility all threw a minority of experts and the majority of citizens into direct contact, after nearly two centuries in which they rarely had to interact with each other.
And yet the result has not been a greater respect for knowledge, but the growth of an irrational conviction among Americans that everyone is as smart as everyone else.
Nichols is distracting himself with the reflexive racial argument; the important change he is highlighting isn’t social but technical.
I’d like to quote a short exchange from Our Southern Highlanders, an anthropological-style text written about Appalachia roughly a century ago:
The mountain clergy, as a general rule, are hostile to “book larnin’,” for “there ain’t no Holy Ghost in it.” One of them who had spent three months at a theological school told President Frost, “Yes, the seminary is a good place ter go and git rested up, but ’tain’t worth while fer me ter go thar no more ’s long as I’ve got good wind.”
It used to amuse me to explain how I knew that the earth was a sphere; but one day, when I was busy, a tiresome old preacher put the everlasting question to me: “Do you believe the earth is round?” An impish perversity seized me and I answered, “No—all blamed humbug!” “Amen!” cried my delighted catechist, “I knowed in reason you had more sense.”
But back to Nichols, who really likes the concept of expertise:
One reason claims of expertise grate on people in a democracy is that specialization is necessarily exclusive. When we study a certain area of knowledge or spend our lives in a particular occupation, we not only forego expertise in other jobs or subjects, but also trust that other people in the community know what they’re doing in their area as surely as we do in our own. As much as we might want to go up to the cockpit after the engine flames out to give the pilots some helpful tips, we assume–in part, because we have to–that they’re better able to cope with the problem than we are. Otherwise, our highly evolved society breaks down into islands of incoherence, where we spend our time in poorly informed second-guessing instead of trusting each other.
This would be a good point to look at data on overall trust levels, friendship, civic engagement, etc (It’s down. It’s all down.) and maybe some explanations for these changes.
Nichols talks briefly about the accreditation and verification process for producing “experts,” which he rather likes. There is an interesting discussion in the economics literature on things like the economics of trust and information (how do websites signal that they are trustworthy enough that you will give them your credit card number and expect to receive items you ordered a few days later?) which could apply here, too.
Nichols then explores a variety of cognitive biases, such as superstitions, phobias, and conspiracy theories:
Conspiracy theories are also a way for people to give meaning to events that frighten them. Without a coherent explanation for why terrible things happen to innocent people, they would have to accept such occurrences as nothing more than the random cruelty either of an uncaring universe or an incomprehensible deity. …
The only way out of this dilemma is to imagine a world in which our troubles are the fault of powerful people who had it within their power to avert such misery. …
Just as individuals facing grief and confusion look for reasons where none may exist, so, too, will entire societies gravitate toward outlandish theories when collectively subjected to a terrible national experience. Conspiracy theories and the flawed reasoning behind them… become especially seductive “in any society that has suffered an epic, collectively felt trauma. In the aftermath, millions of people find themselves casting about for an answer to the ancient question of why bad things happen to good people.” …
Today, conspiracy theories are a reaction mostly to the economic and social dislocations of globalization…This is not a trivial obstacle when it comes to the problems of expert engagement with the public: nearly 30 percent of Americans, for example, think “a secretive elite with a globalist agenda is conspiring to eventually rule the world” …
Obviously stupid. A not-secret elite with a globalist agenda already rules the world.
and 15 percent think the media or government add secret mind-controlling technology to TV broadcasts. (Another 15 percent aren’t sure about the TV issue.)
It’s called “advertising” and it wants you to buy a Ford.
Anyway, the problem with conspiracy theories is they are unfalsifiable; no amount of evidence will ever convince a conspiracy theorist that he is wrong, for all evidence is just further proof of how nefariously “they” are constructing the conspiracy.
Then Nichols gets into some interesting matter on the difference between stereotypes and generalizations, which segues nicely into a tangent I’d like to discuss, but it probably deserves its own post. To summarize:
Sometimes experts know things that contradict other people’s political (or religious) beliefs… If an “expert” finding or field accords with established liberal values, EG, the implicit association test found that “everyone is a little bit racist,” which liberals already believed, then there is an easy mesh between what the academics believe and the rest of their social class.
If their findings contradict conservative/low-class values, EG, when professors assert that evolution is true and “those low-class Bible-thumpers in Oklahoma are wrong,” sure, they might have a lot of people who disagree with them, but those people aren’t part of their own social class/the upper class, and so not a problem. If anything, high class folks love such findings, because it gives them a chance to talk about how much better they are than those low-class people (though such class conflict is obviously poisonous in a democracy where those low-class people can still vote to Fuck You and Your Global Warming, Too.)
But if the findings contradict high-class/liberal politics, then the experts have a real problem. EG, if that same evolution professor turns around and says, “By the way, race is definitely biologically real, and there are statistical differences in average IQ between the races,” now he’s contradicting the political values of his own class/the upper class, and that becomes a social issue and he is likely to get Watsoned.
Jordan Peterson isn’t unpopular or “silenced” so much as he is disliked by upper class folks and liked by “losers” and low class folks, despite the fact that he is basically an intellectual guy and isn’t peddling a low-class product. Likewise, Fox News is just as much part of The Media as NPR, (if anything, it’s much more of the Media) but NPR is higher class than Fox, and Fox doesn’t like feeling like its opinions are being judged along this class axis.
For better or for worse (mostly worse) class politics and political/religious beliefs strongly affect our opinions of “experts,” especially those who say things we disagree with.
But back to Nichols: Dunning-Kruger effect, fake cultural literacy, and too many people at college. Nichols is a professor and has seen college students up close and personal, and has a low opinion of most of them. The massive expansion of higher education has not resulted in a better-educated, smarter populace, he argues, but a populace armed with expensive certificates that show they sat around a college for 4 years without learning much of anything. Unfortunately, beyond a certain level, there isn’t a lot that more school can do to increase people’s basic aptitudes.
Colleges get money by attracting students, which incentivises them to hand out degrees like candy–in other words, students are being lied to about their abilities and college degrees are fast becoming the participation trophies for the not very bright.
Nichols has little sympathy for modern students:
Today, by contrast, students explode over imagined slights that are not even remotely in the same category as fighting for civil rights or being sent to war. Students now build majestic Everests from the smallest molehills, and they descend into hysteria over pranks and hoaxes. In the midst of it all, the students are learning that emotions and volume can always defeat reason and substance, thus building about themselves fortresses that no future teacher, expert, or intellectual will ever be able to breach.
At Yale in 2015, for example, a house master’s wife had the temerity to tell minority students to ignore Halloween costumes they thought offensive. This provoked a campus-wide temper tantrum that included professors being shouted down by screaming students. “In your position as master,” one student howled in a professor’s face, “it is your job to create a place of comfort and home for the students… Do you understand that?!”
Quietly, the professor said, “No, I don’t agree with that,” and the student unloaded on him:
“Then why the [expletive] did you accept the position?! Who the [expletive] hired you?! You should step down! If that is what you think about being a master you should step down! It is not about creating an intellectual space! It is not! Do you understand that? It’s about creating a home here. You are not doing that!” [emphasis added]
Yale, instead of disciplining the students who violated its own norms of academic discourse, apologized to the tantrum throwers. The house master eventually resigned from his residential post…
To faculty everywhere, the lesson was obvious: the campus of a top university is not a place for intellectual exploration. It is a luxury home, rented for four to six years, nine months at a time, by children of the elite who may shout at faculty as if they’re berating clumsy maids in a colonial mansion.
The incidents Nichols cites (and similar ones elsewhere) are not just matters of college students being dumb or entitled, but explicitly racial conflicts. The demand for “safe spaces” is easy to ridicule on the grounds that students are emotional babies, but this misses the point: students are carving out territory for themselves on explicitly racial lines, often by violence.
Nichols, though, either does not notice the racial aspect of modern campus conflicts or does not want to admit publicly to doing so.
Nichols moves on to blame TV, especially CNN, talk radio, and the internet for dumbing down the quality of discourse by overwhelming us with a deluge of more information than we can possibly process.
Referring back to Auerswald and The Code Economy: if automation creates a bifurcation in industries, replacing a moderately-priced, moderately available product with a stream of cheap, low-quality products on the one hand and a trickle of expensive, high-quality products on the other, then good-quality journalism has been replaced with a flood of low-quality crap. The high-quality end is still working itself out.
Accessing the Internet can actually make people dumber than if they had never engaged a subject at all. The very act of searching for information makes people think they’ve learned something, when in fact they’re more likely to be immersed in yet more data they do not understand. …
When a group of experimental psychologists at Yale investigated how people use the internet, they found that “people who search for information on the Web emerge from the process with an inflated sense of how much they know–even regarding topics that are unrelated to the ones they Googled.” …
How can exposure to so much information fail to produce at least some kind of increased baseline of knowledge, if only by electronic osmosis? How can people read so much yet retain so little? The answer is simple: few people are actually reading what they find.
As a University College of London (UCL) study found, people don’t actually read the articles they encounter during a search on the Internet. Instead, they glance at the top line or the first few sentences and then move on. Internet users, the researchers noted, “are not reading online in the traditional sense; indeed, there are signs that new forms of ‘reading’ are emerging as users ‘power browse’ horizontally through titles, contents pages and abstracts going for quick wins. It almost seems that they go online to avoid reading in the traditional sense.”
The internet’s demands for instant updates, for whatever headlines generate the most clicks (and thus advertising revenue), has upset the balance of speed vs. expertise in the newsroom. No longer have reporters any incentive to spend long hours carefully writing a well-researched story when such stories pay less than clickbait headlines about racist pet costumes and celebrity tweets.
I realize it seems churlish to complain about the feast of news and information brought to us by the Information Age, but I’m going to complain anyway. Changes in journalism, like the increased access to the Internet and to college education, have unexpectedly corrosive effects on the relationship between laypeople and experts. Instead of making people better informed, much of what passes for news in the twenty-first century often leaves laypeople–and sometimes experts–even more confused and ornery.
Experts face a vexing challenge: there’s more news available, and yet people seem less informed, a trend that goes back at least a quarter century. Paradoxically, it is a problem that is worsening rather than dissipating. …
As long ago as 1990, for example, a study conducted by the Pew Trust warned that disengagement from important public questions was actually worse among people under thirty, the group that should have been most receptive to then-emerging sources of information like cable television and electronic media. This was a distinct change in American civic culture, as the Pew study noted:
“Over most of the past five decades younger members of the public have been at least as well informed as older people. In 1990, that is no longer the case. … “
Those respondents are now themselves middle-aged, and their children are faring no better.
If you were 30 in 1990, you were born in 1960, to parents who were between the ages of 20 and 40 years old, that is, born between 1920 and 1940.
Fertility for the 1920-1940 cohort was strongly dysgenic. So was the 1940-50 cohort. The 1900-1919 cohort at least had the Flynn Effect on their side, but later cohorts just look like an advertisement for idiocracy.
Nichols ends with a plea that voters respect experts (and that experts, in turn, be humble and polite to voters.) After all, modern society is too complicated for any of us to be experts on everything. If we don’t pay attention to expert advice, he warns, modern society is bound to end in ignorant goo.
The logical inconsistency is that Nichols believes in democracy at all–he thinks democracy can be saved if ignorant people vote within a range of options as defined by experts like himself, eg, “What vaccine options are best?” rather than “Should we have vaccines at all?”
The problem, then, is that whoever controls the experts (or controls which expert opinions people hear) controls the limits of policy debates. This leads to people arguing over experts, which leads right back where we are today. As long as there are politics, “expertise” will be politicized, eg:
Look at any court case in which both sides bring in their own “expert” witnesses. Both experts testify to the effect that their side is correct. Then the jury is left to vote on which side had more believable experts. This is basically the best-case scenario for voting, and the fact that the voters are dumb, don’t understand what the experts are saying, and are obviously being misled in many cases is still a huge problem.
If politics is the problem, then perhaps getting rid of politics is the solution. Just have a bunch of Singapores run by Lee Kuan Yews, let folks like Nichols advise them, and let the common people “vote with their feet” by moving to the best states.
The problem with this solution is that “exit” doesn’t exist in the modern world in any meaningful way, and there are significant reasons why ordinary people oppose open borders.
Conclusion: 3/5 stars. It’s not a terrible book, and Nichols has plenty of good points, but “Americans are dumb” isn’t exactly fresh territory and much has already been written on the subject.
People think memetic viruses are just going to ask politely about infecting you, like the Jehovah’s Witnesses: “Hello, can I talk to you today about the importance of WWIII with Russia?”
No. Mind-viruses are not polite. They USE you. They use your empathy and compassion to make you feel like a shit person for rejecting them. They throw dying children in your face and demand that you start a war to save them.
They hijack your sense of yourself as a good person.
I call this the empathy trap.
Why did this take Stone Cold’s breath away? Why is it shocking?
It’s a basically true statement– the 3/5ths compromise originated in 1783 and was still around in 1789, when the 2nd Amendment was proposed–but so are “California became the 31st American state when I was deemed 3/5ths of a person,” “Napoleon invaded Russia when I was deemed 3/5ths of a person” and “The New York Times was founded, the safety elevator was invented, Massachusetts passed the nation’s first child employment laws, the first telegrams were sent, and Jane Eyre was published when I was deemed 3/5ths of a person.”
A lot happened between 1783 and 1861.
As unpleasant as the 3/5ths compromise is to think back on, we should remember that it was not passed because proponents thought black people only counted as “3/5ths of a person,” but because they didn’t want slave owners using census counts of non-voting slaves to get more votes for their states in the federal government. The 3/5ths compromise actually reduced the power of the slave-owning states relative to the non-slave owning states, in exchange for a break on taxes.
So this isn’t shocking because it’s factually true (I can come up with a whole list of equally true but unshocking statements) nor because the 3/5ths compromise was evil.
Perhaps it is shocking because it points out how old the 2nd Amendment is? But there are many other equally old–or older–things we find completely mundane. Mozart was writing operas in the 1790s; US copyright law began in the 1790s; Edward Jenner developed his smallpox vaccine in 1796; Benjamin Franklin invented the “swim fin” or flippers back in 1717. I don’t think anyone’s throwing out their flippers just because the concept is older than the entire country.
No; it’s shocking because “I was deemed 3/5ths of a person” appeals immediately to your sense of empathy.
Do you respond, “That doesn’t matter”?
“What do you mean, it doesn’t matter that I was considered only 3/5ths of a person? That matters a lot to me.”
“Oh, no, of course, I didn’t mean that it doesn’t matter like that, of course I understand that matters to you–”
Now you’re totally off-topic.
In order to see that this is a non sequitur, you first have to step back from the emotion. Push it aside, if you must. Yes, slavery was evil, but what does it have to do with the 2nd Amendment? Nothing. Reject the frame.
Mitochondrial memes are passed down from your parents and other trusted members of your family and community. You don’t typically have to be convinced of them; children tend to just believe their parents. That’s why you believed all of that business about Santa Claus. Meme viruses, by contrast, come from the wider community, typically strangers. Meme viruses have to convince you to adopt them, which can be quite a bit harder. This is why so many people follow their parents’ religion, and so few people convert to new religions as adults. Most religious transmission is basically mitochondrial–even if the Jehovah’s Witnesses show up at your doorstep fairly often.
To spread faster and more effectively, therefore, meme viruses have to convince you to lower your defenses and let them spread. They convince you that believing and spreading them is part of being a good person. They demand that if you really care about issue X, then you must also care about issues W, Y, and Z. “If you want to fight racism, you also have to go vegan, because all systems of oppression are intersectionally linked,” argues the vegan. “If you love Jesus, you must support capitalism because those godless commies hate Jesus.” Jesus probably also supported socialism and veganism, depending on whom you ask. “This photo of Kim Kardashian balancing a wine glass on her ass is problematic because once someone took a picture of a black woman in the same pose and that was racist.” “Al Qaeda launched an attack on 9-11, therefore we need to topple Saddam Hussein.” “A Serbian anarchist shot an Austro-Hungarian archduke, therefore we need to have WWI.” “Assad used chemical weapons, therefore the US needs to go to war with Russia.”
Once you are sensitive to this method of framing, you’ll notice it fairly often.
There is a commonly-believed strategic model of terrorism which we could describe as follows: terrorists are people who are ideologically motivated to pursue specific unvarying political goals; to do so, they join together in long-lasting organizations and after the failure of ordinary political tactics, rationally decide to efficiently & competently engage in violent attacks on (usually) civilian targets to get as much attention as possible and publicity for their movement, and inspire fear & terror in the civilian population, which will pressure its leaders to solve the problem one way or another, providing support for the terrorists’ favored laws and/or their negotiations with involved governments, which then often succeed in gaining many of the original goals, and the organization dissolves.
Unfortunately, this model is, in almost every respect, empirically false.
It’s a great essay, so go read the whole thing before we continue. Don’t worry; I’ll wait.
Now, since I know half of you didn’t actually read the essay, I’ll summarize: terrorists are really bad at accomplishing their “objectives.” By any measure, they are really bad at it. Simply doing nothing would, in most cases, further their political goals more effectively.
This is in part because terrorists tend not to conquer and hold land, and in part because terrorism tends to piss off its targets, making them less likely to give in to the terrorists’ demands. Consider 9-11: sure, the buildings fell down, but did it result in America conceding to any of Al-Qaeda’s demands?
The article quotes Abrahms 2012:
Jones and Libicki (2008) then examined a larger sample, the universe of known terrorist groups between 1968 and 2006. Of the 648 groups identified in the RAND-MIPT Terrorism Incident database, only 4% obtained their strategic demands. … Chenoweth and Stephan (2008, 2011) provide additional empirical evidence that meting out pain hurts non-state actors at the bargaining table. … These statistical findings are reinforced with structured in-case comparisons highlighting that escalating from nonviolent methods of protest such as petitions, sit-ins, and strikes to deadly attacks tends to dissuade government compromise. … Other statistical research (Abrahms, 2012, Fortna, 2011) demonstrates that when terrorist attacks are combined with such discriminate violence, the bargaining outcome is not additive; on the contrary, the pain to the population significantly decreases the odds of government concessions.3
(Aside: Remember, right-wing violence doesn’t work. It’s stupid and you will fail at accomplishing anything.)
Another “mystery” about terrorism is that it actually doesn’t happen very often. It’s not that hard to drive a truck into a crowd or attack people with a machete. Armies are expensive; coughing on grocery store produce is cheap.
If terrorism is 1. ineffective and 2. not even used that often, why do terrorist groups exist at all?
Terrorists might just be dumb, stupid people who try to deal with their problems by blowing them up, but there’s no evidence to this effect–terrorists are not less intelligent than the average person in their societies, anyway. People who are merely dumb and violent tend to get into fights with their neighbors, not take airplanes hostage.
Gwern suggests a different possibility: People join terrorist organizations because they want to be friends with the other terrorists. They’re like social clubs, but instead of bowling, you talk about how going on jihad would be totally awesome.
Things people crave: Meaning. Community. Brotherhood.
Terrorist organizations provide these to their members, most of whom don’t actually blow themselves up.
Friendships cultivated in the jihad, just as those forged in combat in general, seem more intense and are endowed with special significance. Their actions taken on behalf of God and the umma are experienced as sacred. This added element increases the value of friendships within the clique and the jihad in general and diminishes the value of outside friendships.
Enough about terrorists; let’s talk about Americans:
“Jihad” is currently part of the Islamic cultural script–that is, sometimes Muslims see some form of “jihad” as morally acceptable. (They are not unique in committing terrorism, though–Marxist terrorists have created trouble throughout Latin America, for instance, and the Tamil Tigers of Sri Lanka were one of the world’s deadliest groups.)
Thankfully, though, few major groups in the US see jihad or terrorist violence as acceptable, but… we have our exceptions.
For example, after a Jewish professor, Bret Weinstein, declined to stay home on a “Day of Absence” intended to force whites away from Evergreen State College, WA, violent protests erupted. Bands of students armed with bats and tasers roamed the campus, searching for Weinstein; the poor professor was forced to flee and eventually resign.
During a Berkeley protest on August 27, 2017, an estimated one hundred antifa protesters joined a crowd of 2,000–4,000 counter-protesters to attack a reported “handful” of alt-right demonstrators and Trump supporters who showed up for a “Say No to Marxism” rally that had been cancelled by organizers due to security concerns. Some antifa activists beat and kicked unarmed demonstrators and threatened to smash the cameras of anyone who filmed them.
Antifa, like terrorist groups, typically attract folks who are single and have recently left home–young people who have just lost the community they were raised in and are in search of a new one.
The article recounts an amusing incident when a terrorist organization wanted to disband a cell, but struggled to convince its members to abandon their commitment to sacrificing themselves on behalf of jihad. Finally they hit upon a solution: they organized social get-togethers with women, then incentivized the men to get married, get jobs, and have babies. Soon all of the men were settled and raising children, too busy and invested in their new families to risk sacrificing it all for jihad. The cell dissolved.
Even Boko Haram was founded in response to the difficulties young men in Nigeria face in affording brides:
Our recent study found that marriage markets and inflationary brideprice are a powerful driver of participation in violence and drive recruitment into armed groups. Armed groups often arrange low-cost marriages for their members, help members afford brideprice, or provide extra-legal opportunities to acquire the capital necessary to take a wife. In Nigeria, in the years in which Boko Haram gained influence under founder Mohammed Yusuf, “items required for [a] successful [marriage] celebration kept changing in tune with inflation over the years.” A resident of the Railroad neighborhood of Maiduguri, where Yusuf established his mosque, recalled that in just a few years, Yusuf had facilitated more than 500 weddings. The group also provided support for young men to become “okada drivers,” who gained popularity for their affordable motorbike taxi services and who often used their profits to afford marriage. Thus, Boko Haram’s early recruits were often attracted by the group’s facilitation of marriage. Even in the aftermath of Yusuf’s assassination by the Nigerian state and the rise of Abubakar Shekau, the group has continued to exploit obstacles to marriage to attract supporters. The women and girls that are abducted by the group, estimated to number more than 6,000, are frequently married off to members of the group.
Antifa of course aren’t the only people in the US who commit violence; the interesting fact here is their organization. As far as I know, Dylann Roof killed more people than Antifa, but Roof acted alone.
I suggest, therefore, that the principal thing driving Antifa (and similar organizations) isn’t a rational pursuit of their stated objectives (did driving Milo out of Berkeley actually protect any illegal immigrants from deportation?) but the same social factors that drive Muslims to join terrorist groups: camaraderie, brotherhood, and the feeling that they are leading meaningful, moral lives by sacrificing themselves for their chosen cause.
Right-wingers do this, too (the military is an obvious source of “meaning” and “brotherhood” in many people’s lives).
And the pool of unmarried people to recruit into extremist organizations is only growing in America.
But we don’t have to look to organizations that commit violence to find this pattern. Why change one’s avatar to a rainbow pattern to celebrate gay marriage or overlay a French flag after the Charlie Hebdo attack?
Why spend hours “fighting racism” by “deconstructing whiteness” online when you could do far more to help black people by handing out sandwiches at your local homeless shelter? (The homeless would also appreciate a hot lasagna.) What percentage of people who protest Islamophobia have actually bothered to befriend some Muslims and express support toward them?
The obvious answer is that these activities enhance the actor’s social standing among their friends and online compatriots. Congratulations received for turning your profile picture different colors: objective achieved. Actions that would actually help the targeted group require more effort and return less adulation, since they have to be done in real life.
Liberal groups seem to be better at social organizing–thus I’ve had an easier time coming up with liberal examples of this phenomenon. Conservative political organizations, at least in the US, seem to be smaller and offer less in the way of social benefits (this may be in part because conservatives are more likely to be married, employed, and raising children, and because conservatives are more likely to channel such energies into their churches,) but they also do their share of social signaling that doesn’t achieve its claimed goal. “White pride” organizations, for example, generally do little to improve whites’ public image.
But is this an aberration? Or are things operating as designed? What’s the point of friendship and social standing in the first place?
Interestingly, in Jane Goodall’s account of chimps in the Gombe, we see parallels to the origins of human social structures and friendships. Only male chimps consistently have what we would call “friendships;” females instead tend to live in groups with their children. Male friends benefit from each other’s assistance in hunting and controlling access to other food, like the coveted bananas. A single strong male may dominate a troop of chimps, but a coalition can bring him to a bloody end. Persistent dominance of a chimp troop (and thus dominance of food) is thus easier for males who have a strong coalition on their side–that is, friends.
From these things therefore it is clear that the city-state is a natural growth, and that man is by nature a political animal, and a man that is by nature and not merely by fortune citiless is either low in the scale of humanity or above it … inasmuch as he is solitary, like an isolated piece at draughts.
And why man is a political animal in a greater measure than any bee or any gregarious animal is clear. For nature, as we declare, does nothing without purpose; and man alone of the animals possesses speech. … speech is designed to indicate the advantageous and the harmful, and therefore also the right and the wrong; for it is the special property of man in distinction from the other animals that he alone has perception of good and bad and right and wrong and the other moral qualities, and it is partnership in these things that makes a household and a city-state.
Most people desire to be members in good standing in their communities:
Thus also the city-state is prior in nature to the household and to each of us individually.  For the whole must necessarily be prior to the part; since when the whole body is destroyed, foot or hand will not exist except in an equivocal sense… the state is also prior by nature to the individual; for if each individual when separate is not self-sufficient, he must be related to the whole state as other parts are to their whole, while a man who is incapable of entering into partnership, or who is so self-sufficing that he has no need to do so, is no part of a state, so that he must be either a lower animal or a god.
Therefore the impulse to form a partnership of this kind is present in all men by nature… –Aristotle, Politics, Book 1
The spread of the internet has changed both who we’re talking to (the people in our communities) and how we engage with them, resulting in, I hypothesize, a memetic environment that increasingly favors horizontally (rather than vertically) transmitted memes. (If you are not familiar with this theory, I wrote about it here, here, here, here, here, here, here, and here.) Vertically spread memes tend to come from your parents and are survival-oriented; horizontal memes come from your friends and are social. A change in the memetic environment, therefore, has the potential to change the landscape of social, moral, and political ideas people frequently encounter–and has allowed us to engage in nearly costless, endless social signaling.
The result of that, it appears, is political polarization:
According to Pew:
A decade ago, the public was less ideologically consistent than it is today. In 2004, only about one-in-ten Americans were uniformly liberal or conservative across most values. Today, the share who are ideologically consistent has doubled: 21% express either consistently liberal or conservative opinions across a range of issues – the size and scope of government, the environment, foreign policy and many others.
The new survey finds that as ideological consistency has become more common, it has become increasingly aligned with partisanship. Looking at 10 political values questions tracked since 1994, more Democrats now give uniformly liberal responses, and more Republicans give uniformly conservative responses than at any point in the last 20 years.
This, of course, makes it harder for people to find common ground for compromises.
So if we want a saner, less histrionic political culture, the first step may be encouraging people to settle down, get married, and have children, then work on building communities that let people feel a sense of meaning in their real lives.
Still, I think letting your friends convince you that blowing yourself up is a good idea is pretty dumb.
Note: “Memes” on this blog is used as it is in the field of memetics, representing units of ideas that are passed from person to person, not in the sense of “funny cat pictures on the internet.”
“Mitochondrial memes” are memes that are passed vertically from parent to child, like “it’s important to eat your dinner before dessert” or “brush your teeth twice a day or your teeth will rot out.”
“Meme viruses” (I try to avoid the confusing phrase, “viral memes,”) are memes that are transmitted horizontally through society, like chain letters and TV news.
I’ve spent a fair amount of time warning about some of the potential negative results of meme viruses, but today I’d like to discuss one of their greatest strengths: you can transmit them to other people without using them yourself.
Let’s start with genetics. It is very easy to quickly evolve in a particular direction if a variety of relevant traits already exist in a population. For example, humans already vary in height, so if you wanted to, say, make everyone on Earth shorter, you would just have to stop all of the tall people from reproducing. The short people would create the next generation, and it would be short.
But getting adult human height below three feet requires not just existing, normal human height variation, but exploiting random mutations. These are rare, and the people who have them normally incur huge reductions in fitness, as they often have problems with bone growth, intelligence, and giving birth.
Most random mutations simply result in an organism’s death. Very few are useful, and those that are have to beat out all of the other local genetic combinations to actually stick around.
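The “select on existing variation” point can be sketched as a toy truncation-selection simulation. This is a hedged illustration only: the 165 cm cutoff, the noise term, and the population size are all hypothetical numbers chosen to make the effect visible.

```python
import random

random.seed(0)

def next_generation(population, cutoff):
    """Only individuals shorter than `cutoff` reproduce; each child
    gets the average of two random parents' heights plus small noise."""
    parents = [h for h in population if h < cutoff]
    return [
        (random.choice(parents) + random.choice(parents)) / 2
        + random.gauss(0, 2)  # environmental/mutational noise, in cm
        for _ in range(len(population))
    ]

# Start with heights normally distributed around 170 cm.
pop = [random.gauss(170, 10) for _ in range(1000)]
print(round(sum(pop) / len(pop)))  # starts near 170

# Ten generations of "stop all the tall people from reproducing":
for _ in range(10):
    pop = next_generation(pop, cutoff=165)
print(round(sum(pop) / len(pop)))  # mean falls well below the cutoff
```

Note that no new mutations are needed to shift the average: selection just reshuffles variation that already exists, which is why this kind of change is fast, while pushing far outside the existing range is not.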
Suppose you happen to be born with a very lucky genetic trait: a rare mutation that lets you survive more easily in an arctic environment.
But you were born in Sudan.
Your genetic trait could be really useful if you could somehow give it away to someone in Siberia, but no, you are stuck in Sudan and you are really hot all of the time and then you die of heatstroke.
With the evolution of complex thought, humans (nearly alone among animals) developed the ability to go beyond mere genetic abilities, instincts, and impulses, and impart stores of knowledge to the next generation. Humanity has been accumulating mitochondrial memes for millions of years, ever since the first human showed another human how to wield fire and create stone tools. (Note: the use of fire and stone tools predates the emergence of Homo sapiens by a long while, but not the Homo genus.)
But mitochondrial memes, to get passed on, need to offer some immediate benefit to their holders. Humans are smart enough–and the utility of information unpredictable enough–that we can hold onto some ideas that aren’t obviously useful, or are even absurd, but the bulk of our efforts have to go toward information that helps us survive.
(By definition, mitochondrial memes aren’t written down; they have to be remembered.)
If an idea doesn’t offer some benefit to its holder, it is likely to be quickly forgotten–even if it could be very useful to someone else.
Suppose one day you happen to have a brilliant new idea for how to keep warm in a very cold environment–but you live in Sudan. If you can’t tell your idea to anyone who lives somewhere cold, your idea will never be useful. It will die with you.
But introduce writing, and ideas of no use to their holder can be recorded and transmitted to people who can use them. For example, in 1502, Leonardo da Vinci designed a 720-foot (220 m) bridge for Ottoman Sultan Beyazid II of Constantinople. The sultan never built Leonardo’s bridge, but in 2001, a bridge based on his design was finally built in Norway. Leonardo’s ideas for flying machines, while also not immediately useful, inspired generations of future engineers.
Meme viruses don’t have to be immediately useful to stick around. They can be written down, tucked into a book, and picked up again a hundred years later and a thousand miles away by someone who can use them. A person living in Sudan can invent a better way to stay warm, write it down, and send it to someone in Siberia–and someone in Siberia can invent a better way to stay cool, write it down, and send it back.
Many modern scientific and technological advances are based on the contributions of not one or two or ten inventors, but thousands, each contributing their unpredictable part to the overall whole. Electricity, for example, was a mere curiosity when Thales of Miletus wrote about effects of rubbing amber to produce static electricity (the word “electricity” is actually derived from the Greek for “amber”;) between 1600 and 1800, scientists began studying electricity in a more systematic way, but it still wasn’t useful. It was only with the invention of the telegraph from many different electrical parts and systems, (first working model, 1816; first telegram sent in the US, 1838;) that electricity became useful. With the invention of electric lights and the electrical grids necessary to power them (1870s and 80s,) electricity moved into people’s homes.
The advent of meme viruses has thus given humanity two gifts: 1. people can use technology like books and the internet to store more information than we can naturally, like external hard-drives for our brains; and 2. we can preserve and transmit ideas that aren’t immediately useful to ourselves to people who can use them.
Homo sapiens is about 200,000–300,000 years old, depending on exactly where you draw the line between us and our immediate ancestors. Printing (and eventually mass literacy) only got going about 550 years ago, with the development of the Gutenberg press. TV, radio, movies, and the internet only became widespread within the past century, and the internet only within the past 25 years.
In other words, for 99.99% of human history, “mass media” didn’t exist.
How did illiterate peasants learn about the world, if not from books, TV, or Youtube videos? Naturally, from each other: parents passed knowledge to children; tribal elders taught their wisdom to other members of their tribes; teenagers were apprenticed to masters who already knew a trade, etc.
A hundred years ago, if you wanted to know how to build a wagon, raise a barn, or plant corn, you generally had to find someone who knew how to do so and ask them. Today, you ask the internet.
Getting all of your information from people you know is limiting, but it has two advantages: you can easily judge whether the source of your information is reliable, (you’re not going to take farming advice from your Uncle Bob whose crops always fail,) and most of the people giving you information have your best interests at heart.
The internet’s strength is that it lets us talk to people from outside our own communities; its weakness is that this makes it much easier for people (say, Nigerian princes with extra bank accounts,) to get away with lying. They also have no particular interest one way or another in your survival–unlike your parents.
In a mitochondrial memetic environment (that is, an environment where you get most of your information from relatives,) memes that could kill you tend to get selected against: parents who encourage their children to eat poison tend not to have grandchildren. From an evolutionary perspective, deadly memes are selected against in a mitochondrial environment; memes will evolve to support your survival.
By contrast, in a viral meme environment, (that is, an environment where ideas can easily pass from person to person without anyone having to give birth,) your personal survival is not all that important to the idea’s success.
So one of the risks of viral memes is getting scammed: being memetically infected by an idea that sounds good but actually benefits someone else at your expense.
In the mitochondrial environment, we expect people to be basically cautious; in the viral, less cautious.
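The contrast between the two environments can be sketched in a toy model. Everything here is hypothetical: the assumption that carriers of a harmful meme reproduce at 80% the rate of non-carriers, and the horizontal “conversion rate,” are illustrative parameters, not measurements.

```python
def simulate(horizontal_rate, generations=30):
    """Track the fraction of a population carrying a harmful meme.
    Vertical step: children copy their parent's meme, but carriers
    reproduce at only 80% the rate of non-carriers.
    Horizontal step: carriers also convert some random non-carriers."""
    carriers = 0.5  # start with half the population carrying the meme
    for _ in range(generations):
        # Vertical transmission: selection against the harmful meme.
        weight_c = carriers * 0.8
        weight_n = (1 - carriers) * 1.0
        carriers = weight_c / (weight_c + weight_n)
        # Horizontal transmission: spread independent of host fitness.
        carriers = min(1.0, carriers + horizontal_rate * carriers * (1 - carriers))
    return carriers

print(simulate(horizontal_rate=0.0))  # purely vertical: harmful meme nearly dies out
print(simulate(horizontal_rate=0.4))  # horizontal spread: meme persists despite harming hosts
```

The point of the sketch is just that a meme which hurts its carriers gets filtered out when it can only pass parent-to-child, but can still take over when it spreads person-to-person fast enough.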
Suppose we have two different groups (Group A and Group B) interacting. 25% of Group B is violent criminals, versus 5% of Group A. Folks in group A would quite logically want to avoid Group B. But 75% of Group B is not violent criminals, and would logically not want to be lumped in with criminals. (For that matter, neither do the 25% who are.)
In an ideal world, we could easily sort out violent criminals from the rest of the population, allowing the innocent people to freely associate. In the real world, we have to make judgment calls. Lean a bit toward the side of caution, and you exclude more criminals, but also more innocents; lean the opposite direction and innocent people have an easier time finding jobs and houses, but more people get killed by criminals.
Let’s put it less abstractly: suppose you are walking down a dimly-lit street at night and see a suspicious looking person coming toward you. It costs you almost nothing to cross the street to avoid them, while not crossing the street could cost you your life. The person you avoided, if they are innocent, incurs only the expense of potentially having their feelings hurt; if they are a criminal, they have lost a victim.
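The asymmetry above can be put in back-of-the-envelope numbers. Every figure here is hypothetical, chosen only to illustrate the logic, not an estimate of real crime rates or costs.

```python
p_criminal = 0.25             # chance the approaching stranger is violent
cost_if_attacked = 1_000_000  # your cost of being victimized
cost_of_crossing = 1          # your trivial cost of crossing the street
cost_hurt_feelings = 10       # the innocent stranger's cost of being avoided

# Your expected cost for each choice:
expected_cost_stay = p_criminal * cost_if_attacked
expected_cost_cross = cost_of_crossing
print(expected_cost_stay, expected_cost_cross)  # 250000.0 vs 1

# The asymmetry survives even near-certain innocence: at one-in-a-thousand,
# the expected cost of staying still dwarfs the cost of crossing.
print(0.001 * cost_if_attacked)  # 1000.0

# Meanwhile the expected harm imposed on an avoided innocent stays small.
print((1 - p_criminal) * cost_hurt_feelings)  # 7.5
```

Because the downside of guessing wrong is so lopsided, crossing the street “wins” under almost any plausible probability, which is exactly why the caution persists even when it is usually unnecessary.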
Companies also want to avoid criminals, which makes it hard for ex-cons to get jobs (which is an issue if we want folks who are no longer in prison to have an opportunity to earn an honest living besides going on welfare.) Unfortunately, efforts to improve employment chances for ex-cons by preventing employers from inquiring directly about criminal history have resulted in employers using rougher heuristics to exclude felons, like simply not hiring young African American males. Since most companies have far more qualified job applicants than available jobs, the cost to them of excluding young African American males is fairly low–while the cost to African Americans is fairly high.
One of the interesting things about the past 200 years is the West’s historically unprecedented shift from racial apartheid/segregation and actual race-based slavery to full legal (if not always de facto) racial integration.
One of the causes of this shift was doubtless the transition from traditional production modes like farming and horticulture to the modern, industrial economy. Subsistence farming didn’t require a whole lot of employees. Medieval peasants didn’t change occupations very often: most folks ended up working in the same professions as their parents, grandparents, and great-grandparents (usually farming,) probably even on the same estate.
It was only with industrialization that people and their professions began uncoupling; a person could now hold multiple different jobs, in different fields, over the span of years.
Of course, there were beginnings of this before the 1800s–just as people read books before the 1800s–but accelerating technological development accelerated the trends.
But while capitalists want to hire the best possible workers for the lowest possible wages, this doesn’t get us all the way to the complete change we’ve witnessed in racial mores. After all, companies don’t want to hire criminals, either, and any population that produces a lot of criminals tends not to produce a whole lot of really competent workers.
However, the rise of mass communication has allowed us to listen to and empathize with far more people than ever before. When Martin Luther King marched on Washington and asked to be judged by the content of his character rather than the color of his skin, his request only reached national audiences because of modern media, because we now live in a society of meme viruses. And it worked: integration happened.
Also, crime went up dramatically.
Integration triggered a massive increase in crime, which only stopped because… well, we’re not sure, but a corresponding massive increase in the incarceration rate (and sentences) has probably stopped a lot of criminals from committing additional crimes.
Most of these homicides were black on black, but plenty of the victims were white, even as they sold their devalued homes and fled the violence. (Housing integration appears to have struck America’s “ethnic” neighborhoods of Italians, Irish, and Jews particularly hard, destroying coherent communities and, I assume, voting blocs.)
From the white perspective, integration was tremendously costly: people died. Segregation might not have been fair, and it might have killed black people, but it certainly prevented the murder of whites. But segregation, as discussed, does have some costs for whites: you are more limited in all of your transactions, both economic and personal. You can’t sell your house to just anyone you want. Can’t hire anyone you want. Can’t fall in love with anyone you want.
But obviously segregation is far more harmful to African Americans.
Despite all of the trouble integration has caused for whites, the majority claim to believe in it–even though their feet tell a different story. This at least superficial change in attitudes, I believe, was triggered by the nature of the viral memetic environment.
Within the mitochondrial meme environment, you listen to people who care about your survival and they pass on ideas intended to help you survive. They don’t typically pass on ideas that sacrifice your survival for the sake of others, at least not for long. Your parents will tell you that if you see someone suspicious, you should cross the street and get away.
In the viral environment, you interact far more with people who have their own interests in mind, not yours, and these folks would be perfectly happy for you to sacrifice your survival for their sake. The good folks at Penn State would like you to know that locking your car door when a black person passes by is a “microaggression:”
Former President Obama once said in his speech that he was followed when he was shopping in a store, heard the doors of cars locked as he was walking by, and a woman showed extremely nervousness as he got on an elevator with him (Obama, 2013). Those are examples of nonverbal microaggressions. It is disturbing to learn that those behaviors are often automatic that express “put-downs” of individuals in marginalized groups (Pierce et al., 1977). What if Obama were White, would he receive those unfair treatments?
(If Obama were white, like Hillary Clinton, he probably wouldn’t have been elected president.)
For some reason, black people shoplifting, carjacking, or purse-snatching are never described as “microaggressions;” a black person whose feelings are hurt has been microaggressed, but a white person afraid of being robbed or murdered has not been.
This post was actually inspired by an intra-leftist debate:
Shortly after the highly successful African-star-studded movie Black Panther debuted, certain folks, like Faisal Kutty, started complaining that the film is “Islamophobic” because of a scene where girls are rescued from a Boko Haram-like organization.
Never mind that Boko Haram is a real organization, that it actually kidnaps girls, that it has killed more people than ISIS and those people it murders are Africans. Even other Black African Muslims think Boko Haram is shit. (Though obviously BH has its supporters.)
Here we have two different groups of people with different interests: one, Muslims with no particular ties to Africa who don’t want people to associate them with Boko Haram, and two, Black Muslims who don’t want to get killed by folks like Boko Haram.
It is exceedingly disingenuous for folks like Faisal Kutty to criticize as immoral an accurate portrayal of a group that is actually slaughtering thousands of people just because he might accidentally be harmed by association. More attention on Boko Haram could save lives; less attention could result in more deaths–the dead just wouldn’t be Kutty, who is safe in Canada.
Without mass media, I don’t think this kind of appeal works: survival memes dominate and people take danger very seriously. “Some stranger in Canada might be inconvenienced over this” loses to “these people slaughter children.” With mass media, the viral environment allows appeals to set aside your own self-interest and ignore danger in favor of “fairness” and “equality” for everyone in the conversation to flourish.
So far this post has focused primarily on the interests of innocent people, but criminals have interests, too–and criminals would like you to make it easier for them to commit crime.
Simon Mol (6 November 1973 in Buea, Cameroon – 10 October 2008) was the pen name of Simon Moleke Njie, a Cameroon-born journalist, writer and anti-racist political activist. In 1999 he sought political asylum in Poland; it was granted in 2000, and he moved to Warsaw, where he became a well-known anti-racist campaigner. …
In 2005 he organized a conference with Black ambassadors in Poland to protest the claims in an article in Wiedza i Życie by Adam Leszczyński about AIDS problems in Africa, which quoted research stating that a majority of African women were unable to persuade their HIV-positive husbands to wear condoms, and so later caught HIV themselves. Mol accused Leszczyński of prejudice because of this publication. …
Honorary member of the British International Pen Club Centre.
In 2006 Mol received the prestigious award “Oxfam Novib/PEN Award for Freedom of Expression”.
In February 2006, after his partner asked him to take an HIV test, Mol declined and published a post on his blog explaining why:
Character assassination isn’t a new phenomenon. However, it appears here the game respects no rules. It wouldn’t be superfluous to state that there is an ingrained, harsh and disturbing dislike for Africans here. The accusation of being HIV positive is the latest weapon that as an African your enemy can raise against you. This ideologically inspired weapon, is strengthened by the day with disturbing literature about Africa from supposed-experts on Africa, some of whom openly boast of traveling across Africa in two weeks and return home to write volumes. What some of these hastily compiled volumes have succeeded in breeding, is a social and psychological conviction that every African walking the street here is supposedly HIV positive, and woe betide anyone who dares to unravel the myth being put in place.
On the 3rd of January 2007 Mol was taken into custody by the Polish police and charged with infecting his sexual partners with HIV. …
According to the Rzeczpospolita newspaper, he was diagnosed with HIV back in 1999 while living in a refugee shelter, but Polish law does not force an HIV carrier to reveal his or her disease status.
According to the police inspector who was investigating his case, a witness stated that Mol refused to wear condoms during sex. An anonymous witness in one case said that when a girl demanded he wear one, he accused her of racism, saying that because he was Black she must have assumed he was infected with HIV. After sexual intercourse he used to tell his female partners that his sperm was sacred.
In an unusual move, his photo, with an epidemiological warning, was ordered to be publicly displayed by the then Minister of Justice Zbigniew Ziobro. MediaWatch, a body that monitors alleged racism, quickly denounced this decision, asserting that it was a breach of ethics with racist implications, as the picture had been published before any court verdict. They saw it as evidence of institutional racism in Poland, also calling for international condemnation. …
After police published Mol’s photo and an alert before the start of court proceedings, Warsaw HIV testing centers were “invaded by young women”. A few said that they knew Mol. Some of the HIV tests have been positive. According to the police inspector who had been monitoring the tests and the case: “Some women very quickly started to suffer drug-resistant tonsillitis and fungal infections. They looked wasted, some lost as many as 15 kilograms and were deeply traumatized, impeding us from taking witness statements. 18 additional likely victims have been identified thereby”. Genetic tests of the virus from the infectees and Simon proved that it was specific to Cameroon.
In other words, Simon Mol was a sociopath who used the accusation of “racism” to murder dozens of women.
Criminals–of any race–are not nice people. They will absolutely use anything at their disposal to make it easier to commit crime. In the past, they posed as police officers, asked for help finding their lost dog, or just rang your doorbell. Today they can get intersectional feminists and international human rights organizations to argue on their behalf that locking your door or insisting on condoms is the real crime.