People think memetic viruses are just going to ask politely about infecting you, like the Jehovah’s Witnesses: “Hello, can I talk to you today about the importance of WWIII with Russia?”
No. Mind-viruses are not polite. They USE you. They use your empathy and compassion to make you feel like a shit person for rejecting them. They throw dying children in your face and demand that you start a war to save them.
They hijack your sense of yourself as a good person.
I call this the empathy trap.
Why did this take Stone Cold’s breath away? Why is it shocking?
It’s a basically true statement–the 3/5ths compromise originated in 1783 and was still around in 1789, when the 2nd Amendment was proposed–but so are “California became the 31st American state when I was deemed 3/5ths of a person,” “Napoleon invaded Russia when I was deemed 3/5ths of a person” and “The New York Times was founded, the safety elevator was invented, Massachusetts passed the nation’s first child employment laws, the first telegrams were sent, and Jane Eyre was published when I was deemed 3/5ths of a person.”
A lot happened between 1783 and 1861.
As unpleasant as the 3/5ths compromise is to think back on, we should remember that it was not passed because proponents thought black people only counted as “3/5ths of a person,” but because they didn’t want slave owners using census counts of non-voting slaves to get more votes for their states in the federal government. The 3/5ths compromise actually reduced the power of the slave-owning states relative to the non-slave owning states, in exchange for a break on taxes.
So this isn’t shocking because it’s factually true (I can come up with a whole list of equally true but unshocking statements) nor because the 3/5ths compromise was evil.
Perhaps it is shocking because it points out how old the 2nd Amendment is? But there are many other equally old–or older–things we find completely mundane. Mozart was writing operas in the 1790s; US copyright law began in the 1790s; Edward Jenner developed his smallpox vaccine in 1796; Benjamin Franklin invented the “swim fin” or flippers back in 1717. I don’t think anyone’s throwing out their flippers just because the concept is older than the entire country.
No; it’s shocking because “I was deemed 3/5ths of a person” appeals immediately to your sense of empathy.
Do you respond, “That doesn’t matter”?
“What do you mean, it doesn’t matter that I was considered only 3/5ths of a person? That matters a lot to me.”
“Oh, no, of course, I didn’t mean that it doesn’t matter like that, of course I understand that matters to you–”
Now you’re totally off-topic.
In order to see that this is a non sequitur, you first have to step back from the emotion. Push it aside, if you must. Yes, slavery was evil, but what does it have to do with the 2nd Amendment? Nothing. Reject the frame.
Mitochondrial memes are passed down from your parents and other trusted members of your family and community. You don’t typically have to be convinced of them; children tend to just believe their parents. That’s why you believed all of that business about Santa Claus. Meme viruses, by contrast, come from the wider community, typically strangers. Meme viruses have to convince you to adopt them, which can be quite a bit harder. This is why so many people follow their parents’ religion, and so few people convert to new religions as adults. Most religious transmission is basically mitochondrial–even if the Jehovah’s Witnesses show up at your doorstep fairly often.
To spread faster and more effectively, therefore, meme viruses have to convince you to lower your defenses and let them spread. They convince you that believing and spreading them is part of being a good person. They demand that if you really care about issue X, then you must also care about issue W, Y, and Z. “If you want to fight racism, you also have to go vegan, because all systems of oppression are intersectionally linked,” argues the vegan. “If you love Jesus, you must support capitalism because those godless commies hate Jesus.” Jesus probably also supported socialism and veganism, depending on whom you ask. “This photo of Kim Kardashian balancing a wine glass on her ass is problematic because once someone took a picture of a black woman in the same pose and that was racist.” “Al Qaeda launched an attack on 9-11, therefore we need to topple Saddam Hussein.” “A Serbian anarchist shot some Austro-Hungarian archduke, therefore we need to have WWI.” “Assad used chemical weapons, therefore the US needs to go to war with Russia.”
Once you are sensitive to this method of framing, you’ll notice it fairly often.
Note: “Memes” on this blog is used as it is in the field of memetics, representing units of ideas that are passed from person to person, not in the sense of “funny cat pictures on the internet.”
“Mitochondrial memes” are memes that are passed vertically from parent to child, like “it’s important to eat your dinner before dessert” or “brush your teeth twice a day or your teeth will rot out.”
“Meme viruses” (I try to avoid the confusing phrase “viral memes”) are memes that are transmitted horizontally through society, like chain letters and TV news.
I’ve spent a fair amount of time warning about some of the potential negative results of meme viruses, but today I’d like to discuss one of their greatest strengths: you can transmit them to other people without using them yourself.
Let’s start with genetics. It is very easy to quickly evolve in a particular direction if a variety of relevant traits already exist in a population. For example, humans already vary in height, so if you wanted to, say, make everyone on Earth shorter, you would just have to stop all of the tall people from reproducing. The short people would create the next generation, and it would be short.
But getting adult human height below 3′ requires not just existing, normal human height variation, but exploiting random mutations. These are rare, and the people who have them normally incur huge reductions in fitness, as they often have problems with bone growth, intelligence, and giving birth.
Most random mutations simply result in an organism’s death. Very few are useful, and those that are have to beat out all of the other local genetic combinations to actually stick around.
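The difference between selecting on standing variation and waiting for rare mutations is easy to see in a toy simulation (all the numbers here, from population size to starting heights to heritability, are made-up illustrations, not empirical estimates):

```python
import random

def next_generation(population, cutoff):
    """Keep only individuals below the height cutoff, then breed a
    new generation whose heights cluster around their parents'."""
    parents = [h for h in population if h < cutoff]
    return [random.gauss(random.choice(parents), 2.0)
            for _ in range(len(population))]

random.seed(0)
# Illustrative starting population: mean height 170 cm, sd 7 cm.
pop = [random.gauss(170, 7) for _ in range(10_000)]

for gen in range(10):
    # Only the shorter half of the population reproduces each generation.
    pop = next_generation(pop, cutoff=sorted(pop)[len(pop) // 2])
    print(gen, round(sum(pop) / len(pop), 1))
```

Run it and the average height falls generation after generation, because the needed variation already exists in the population. Getting below the range that normal variation supports, by contrast, would require exactly the kind of rare, costly mutations described above.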
Suppose you happen to be born with a very lucky genetic trait: a rare mutation that lets you survive more easily in an arctic environment.
But you were born in Sudan.
Your genetic trait could be really useful if you could somehow give it away to someone in Siberia, but no, you are stuck in Sudan and you are really hot all of the time and then you die of heatstroke.
With the evolution of complex thought, humans (nearly alone among animals) developed the ability to go beyond mere genetic abilities, instincts, and impulses, and impart stores of knowledge to the next generation. Humanity has been accumulating mitochondrial memes for millions of years, ever since the first human showed another human how to wield fire and create stone tools. (Note: the use of fire and stone tools predates the emergence of Homo sapiens by a long while, but not the Homo genus.)
But mitochondrial memes, to get passed on, need to offer some immediate benefit to their holders. Humans are smart enough–and the utility of information unpredictable enough–that we can hold some not obviously useful or absurd ideas, but the bulk of our efforts have to go toward information that helps us survive.
(By definition, mitochondrial memes aren’t written down; they have to be remembered.)
If an idea doesn’t offer some benefit to its holder, it is likely to be quickly forgotten–even if it could be very useful to someone else.
Suppose one day you happen to have a brilliant new idea for how to keep warm in a very cold environment–but you live in Sudan. If you can’t tell your idea to anyone who lives somewhere cold, your idea will never be useful. It will die with you.
But introduce writing, and ideas of no use to their holder can be recorded and transmitted to people who can use them. For example, in 1502, Leonardo da Vinci designed a 720-foot (220 m) bridge for Ottoman Sultan Beyazid II of Constantinople. The sultan never built Leonardo’s bridge, but in 2001, a bridge based on his design was finally built in Norway. Leonardo’s ideas for flying machines, while also not immediately useful, inspired generations of future engineers.
Viral memes don’t have to be immediately useful to stick around. They can be written down, tucked into a book, and picked up again a hundred years later and a thousand miles away by someone who can use them. A person living in Sudan can invent a better way to stay warm, write it down, and send it to someone in Siberia–and someone in Siberia can invent a better way to stay cool, write it down, and send it back.
Many modern scientific and technological advances are based on the contributions of not one or two or ten inventors, but thousands, each contributing their unpredictable part to the overall whole. Electricity, for example, was a mere curiosity when Thales of Miletus wrote about the effects of rubbing amber to produce static electricity (the word “electricity” is actually derived from the Greek for “amber”). Between 1600 and 1800, scientists began studying electricity in a more systematic way, but it still wasn’t useful. It was only with the invention of the telegraph, assembled from many different electrical parts and systems (first working model, 1816; first telegram sent in the US, 1838), that electricity became useful. With the invention of electric lights and the electrical grids necessary to power them (1870s and ’80s), electricity moved into people’s homes.
The advent of meme viruses has thus given humanity two gifts: 1. People can use technology like books and the internet to store more information than we can naturally, like external hard-drives for our brains; and 2. we can preserve and transmit ideas that aren’t immediately useful to ourselves to people who can use them.
Homo sapiens is about 200-300,000 years old, depending on exactly where you draw the line between us and our immediate ancestors. Printing (and eventually mass literacy) only got going about 550 years ago, with the development of the Gutenberg press. TV, radio, movies, and the internet only became widespread within the past century, and internet in the past 25 years.
In other words, for 99.99% of human history, “mass media” didn’t exist.
How did illiterate peasants learn about the world, if not from books, TV, or Youtube videos? Naturally, from each other: parents passed knowledge to children; tribal elders taught their wisdom to other members of their tribes; teenagers were apprenticed to masters who already knew a trade, etc.
A hundred years ago, if you wanted to know how to build a wagon, raise a barn, or plant corn, you generally had to find someone who knew how to do so and ask them. Today, you ask the internet.
Getting all of your information from people you know is limiting, but it has two advantages: you can easily judge whether the source of your information is reliable, (you’re not going to take farming advice from your Uncle Bob whose crops always fail,) and most of the people giving you information have your best interests at heart.
The internet’s strength is that it lets us talk to people from outside our own communities; its weakness is that this makes it much easier for people (say, Nigerian princes with extra bank accounts) to get away with lying. They also have no particular interest one way or another in your survival–unlike your parents.
In a mitochondrial memetic environment (that is, an environment where you get most of your information from relatives), memes that could kill you tend to get selected against: parents who encourage their children to eat poison tend not to have grandchildren. From an evolutionary perspective, then, deadly memes are weeded out of a mitochondrial environment, and the memes that remain will evolve to support your survival.
By contrast, in a viral meme environment, (that is, an environment where ideas can easily pass from person to person without anyone having to give birth,) your personal survival is not all that important to the idea’s success.
So one of the risks of viral memes is getting scammed: memetically, infected by an idea that sounds good but actually benefits someone else at your expense.
In the mitochondrial environment, we expect people to be basically cautious; in the viral, less cautious.
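This differential selection can be sketched as a toy model. Assume (purely for illustration) a population of 1,000 people and a meme that kills 20% of its holders each generation: when the meme can only pass vertically to survivors’ children, it dies out; when it can also jump horizontally, it persists despite the harm it does.

```python
import random

def simulate(harm, horizontal, generations=50, seed=1):
    """Toy model: fraction of a 1,000-person population holding a meme
    that kills its holder with probability `harm` each generation.
    Vertical-only: survivors pass the meme to their children (implicitly,
    survivors simply keep it here). Horizontal: each holder also converts
    one random non-holder before any harm is applied."""
    random.seed(seed)
    pop = [True] * 100 + [False] * 900   # 10% start with the meme
    for _ in range(generations):
        if horizontal:
            converts = sum(pop)          # each holder converts one person
            nonholders = [i for i, m in enumerate(pop) if not m]
            for i in random.sample(nonholders, min(converts, len(nonholders))):
                pop[i] = True
        # Harm: each holder dies with probability `harm` and is replaced
        # by a meme-free newcomer.
        pop = [m and random.random() > harm for m in pop]
    return sum(pop) / len(pop)

print(simulate(harm=0.2, horizontal=False))  # dies out
print(simulate(harm=0.2, horizontal=True))   # persists
```

In the vertical-only run the meme is extinct within a few dozen generations; in the horizontal run it spreads faster than it kills and settles at a large fraction of the population. The numbers are invented, but the asymmetry is the point.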
Suppose we have two different groups (Group A and Group B) interacting. 25% of Group B is violent criminals, versus 5% of Group A. Folks in Group A would quite logically want to avoid Group B. But 75% of Group B is not violent criminals, and would logically not want to be lumped in with criminals. (For that matter, neither do the 25% who are.)
In an ideal world, we could easily sort out violent criminals from the rest of the population, allowing the innocent people to freely associate. In the real world, we have to make judgment calls. Lean a bit toward the side of caution, and you exclude more criminals, but also more innocents; lean the opposite direction and innocent people have an easier time finding jobs and houses, but more people get killed by criminals.
Let’s put it less abstractly: suppose you are walking down a dimly-lit street at night and see a suspicious looking person coming toward you. It costs you almost nothing to cross the street to avoid them, while not crossing the street could cost you your life. The person you avoided, if they are innocent, incurs only the expense of potentially having their feelings hurt; if they are a criminal, they have lost a victim.
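That judgment call is really a back-of-the-envelope expected-cost comparison. A sketch, with every number invented purely for illustration:

```python
# Hypothetical numbers, chosen only to show the shape of the asymmetry.
p_criminal = 0.05          # chance the approaching stranger is dangerous
cost_crossing = 1          # trivial cost: a moment's inconvenience
cost_victimized = 100_000  # catastrophic cost of being attacked

# Expected cost of each choice for the pedestrian:
cross = cost_crossing
stay = p_criminal * cost_victimized

print(f"cross the street: {cross}")
print(f"stay the course:  {stay}")
```

Even a small probability of a catastrophic outcome swamps a trivial certain cost, which is why the cautious choice dominates regardless of how you tune the made-up numbers.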
Companies also want to avoid criminals, which makes it hard for ex-cons to get jobs (an issue if we want folks who are no longer in prison to have an opportunity to earn an honest living other than going on welfare). Unfortunately, efforts to improve employment chances for ex-cons by preventing employers from inquiring directly about criminal history have resulted in employers using rougher heuristics to exclude felons, like simply not hiring young African American males. Since most companies have far more qualified job applicants than available jobs, the cost to them of excluding young African American males is fairly low–while the cost to African Americans is fairly high.
One of the interesting things about the past 200 years is the West’s historically unprecedented shift from racial apartheid/segregation and actual race-based slavery to full legal (if not always de facto) racial integration.
One of the causes of this shift was doubtless the transition from traditional production modes like farming and horticulture to the modern, industrial economy. Subsistence farming didn’t require a whole lot of employees. Medieval peasants didn’t change occupations very often: most folks ended up working in the same professions as their parents, grandparents, and great-grandparents (usually farming,) probably even on the same estate.
It was only with industrialization that people and their professions began uncoupling; a person could now hold multiple different jobs, in different fields, over the span of years.
Of course, there were beginnings of this before the 1800s–just as people read books before the 1800s–but accelerating technological development accelerated the trends.
But while capitalists want to hire the best possible workers for the lowest possible wages, this doesn’t get us all the way to the complete change we’ve witnessed in racial mores. After all, companies don’t want to hire criminals, either, and any population that produces a lot of criminals tends not to produce a whole lot of really competent workers.
However, the rise of mass communication has allowed us to listen to and empathize with far more people than ever before. When Martin Luther King marched on Washington and asked to be judged by the content of his character rather than the color of his skin, his request only reached national audiences because of modern media, because we now live in a society of meme viruses. And it worked: integration happened.
Also, crime went up dramatically.
Integration triggered a massive increase in crime, which only stopped because… well, we’re not sure, but a corresponding massive increase in the incarceration rate (and sentences) has probably stopped a lot of criminals from committing additional crimes.
Most of these homicides were black on black, but plenty of the victims were white, even as they sold their devalued homes and fled the violence. (Housing integration appears to have struck America’s “ethnic” neighborhoods of Italians, Irish, and Jews particularly hard, destroying coherent communities and, I assume, voting blocs.)
From the white perspective, integration was tremendously costly: people died. Segregation might not have been fair, and it might have killed black people, but it certainly prevented the murder of whites. But segregation, as discussed, does have some costs for whites: you are more limited in all of your transactions, both economic and personal. You can’t sell your house to just anyone you want. Can’t hire anyone you want. Can’t fall in love with anyone you want.
But obviously segregation is far more harmful to African Americans.
Despite all of the trouble integration has caused for whites, the majority claim to believe in it–even though their feet tell a different story. This at least superficial change in attitudes, I believe, was triggered by the nature of the viral memetic environment.
Within the mitochondrial meme environment, you listen to people who care about your survival and they pass on ideas intended to help you survive. They don’t typically pass on ideas that sacrifice your survival for the sake of others, at least not for long. Your parents will tell you that if you see someone suspicious, you should cross the street and get away.
In the viral environment, you interact far more with people who have their own interests in mind, not yours, and these folks would be perfectly happy for you to sacrifice your survival for their sake. The good folks at Penn State would like you to know that locking your car door when a black person passes by is a “microaggression”:
Former President Obama once said in his speech that he was followed when he was shopping in a store, heard the doors of cars locked as he was walking by, and a woman showed extremely nervousness as he got on an elevator with him (Obama, 2013). Those are examples of nonverbal microaggressions. It is disturbing to learn that those behaviors are often automatic that express “put-downs” of individuals in marginalized groups (Pierce et al., 1977). What if Obama were White, would he receive those unfair treatments?
(If Obama were white, like Hillary Clinton, he probably wouldn’t have been elected president.)
For some reason, black people shoplifting, carjacking, or purse-snatching are never described as “microaggressions;” a black person whose feelings are hurt has been microaggressed, but a white person afraid of being robbed or murdered has not been.
This post was actually inspired by an intra-leftist debate:
Shortly after the highly successful African-star-studded movie Black Panther debuted, certain folks, like Faisal Kutty, started complaining that the film is “Islamophobic” because of a scene where girls are rescued from a Boko Haram-like organization.
Never mind that Boko Haram is a real organization, that it actually kidnaps girls, that it has killed more people than ISIS, and that the people it murders are Africans. Even other Black African Muslims think Boko Haram is shit. (Though obviously BH has its supporters.)
Here we have two different groups of people with different interests: one, Muslims with no particular ties to Africa who don’t want people to associate them with Boko Haram, and two, Black Muslims who don’t want to get killed by folks like Boko Haram.
It is exceedingly disingenuous for folks like Faisal Kutty to criticize as immoral an accurate portrayal of a group that is actually slaughtering thousands of people just because he might accidentally be harmed by association. More attention on Boko Haram could save lives; less attention could result in more deaths–the dead just wouldn’t be Kutty, who is safe in Canada.
Without mass media, I don’t think this kind of appeal works: survival memes dominate and people take danger very seriously. “Some stranger in Canada might be inconvenienced over this” loses to “these people slaughter children.” With mass media, the viral environment lets appeals flourish that ask you to set aside your own self-interest and ignore danger in favor of “fairness” and “equality” for everyone in the conversation.
So far this post has focused primarily on the interests of innocent people, but criminals have interests, too–and criminals would like you to make it easier for them to commit crime.
Simon Mol (6 November 1973 in Buea, Cameroon – 10 October 2008) was the pen name of Simon Moleke Njie, a Cameroon-born journalist, writer and anti-racist political activist. In 1999 he sought political asylum in Poland; it was granted in 2000, and he moved to Warsaw, where he became a well-known anti-racist campaigner. …
In 2005 he organized a conference with Black ambassadors in Poland to protest the claims in an article in Wiedza i Życie by Adam Leszczyński about AIDS problems in Africa, which quoted research stating that a majority of African women were unable to persuade their HIV positive husbands to wear condoms, and so later got caught HIV themselves. Mol accused Leszczyński of prejudice because of this publication. …
Honorary member of the British International Pen Club Centre.
In 2006 Mol received the prestigious award “Oxfam Novib/PEN Award for Freedom of Expression”.
In February 2006, further to his partner’s request for him to take an HIV test, Mol declined and published a post on his blog explaining why not:
Character assassination isn’t a new phenomenon. However, it appears here the game respects no rules. It wouldn’t be superfluous to state that there is an ingrained, harsh and disturbing dislike for Africans here. The accusation of being HIV positive is the latest weapon that as an African your enemy can raise against you. This ideologically inspired weapon, is strengthened by the day with disturbing literature about Africa from supposed-experts on Africa, some of whom openly boast of traveling across Africa in two weeks and return home to write volumes. What some of these hastily compiled volumes have succeeded in breeding, is a social and psychological conviction that every African walking the street here is supposedly HIV positive, and woe betide anyone who dares to unravel the myth being put in place.
On the 3rd of January 2007 Mol was taken into custody by the Polish police and charged with infecting his sexual partners with HIV. …
According to the Rzeczpospolita newspaper, he was diagnosed with HIV back in 1999 while living in a refugee shelter, but Polish law does not force an HIV carrier to reveal his or her disease status.
According to the police inspector who was investigating his case, a witness stated that Mol refused to wear condoms during sex. An anonymous witness in one case said that he accused a girl who demanded he should wear them that she was racist because as he was Black she thought he must be infected with HIV. After sexual intercourse he used to say to his female partners that his sperm was sacred.
In an unusual move, his photo with an epidemiological warning, was ordered to be publicly displayed by the then Minister of Justice Zbigniew Ziobro. MediaWatch, a body that monitors alleged racism, quickly denounced this decision, asserting that it was a breach of ethics with racist implications, as the picture had been published before any court verdict. They saw it as evidence of institutional racism in Poland, also calling for international condemnation. …
After police published Mol’s photo and an alert before the start of court proceedings, Warsaw HIV testing centers were “invaded by young women”. A few said that they knew Mol. Some of the HIV tests have been positive. According to the police inspector who had been monitoring the tests and the case: “Some women very quickly started to suffer drug-resistant tonsillitis and fungal infections. They looked wasted, some lost as many as 15 kilograms and were deeply traumatized, impeding us taking the witness statements. 18 additional likely victims have been identified thereby”. Genetic tests of the virus from the infectees and Simon proved that it was specific to Cameroon.
In other words, Simon Mol was a sociopath who used the accusation of “racism” to murder dozens of women.
Criminals–of any race–are not nice people. They will absolutely use anything at their disposal to make it easier to commit crime. In the past, they posed as police officers, asked for help finding their lost dog, or just rang your doorbell. Today they can get intersectional feminists and international human rights organizations to argue on their behalf that locking your door or insisting on condoms is the real crime.
As ANI (Asian News International) reports on Twitter (h/t Rohit):
For those of you reading this in the future, after the 15 minutes of manufactured furor have subsided, #MarchForOurLives is an anti-gun/pro-gun-control movement in the US. Gun laws in India are notably much stricter than gun laws in the US, and yet–
The thing that looks like a mushroom is the internal part of a uterus; you can see the rest of the drawing faintly around it. As noted, this is completely backwards from the reality in India, where it is nearly impossible to buy a gun but abortions are extremely common and completely legal. So where did the marchers in Mumbai get this sign?
Well, it’s a meme, found on Twitter, instagram, t-shirts, and of course signs at pussyhat rallies in the US. It’s not even true in the US, but at least it kind of makes sense given our frequent debates over both guns and abortions. Certainly there are some people in the US who think abortions should be completely illegal. India, by contrast, is a nation where slowing the growth rate to prevent famine is a high priority and abortions are quite legal.
I am reminded of that time Michelle Obama tweeted #BringBackOurGirls in support of Nigerians kidnapped by Boko Haram.
This is the signature of a mind-virus: it makes you repeat things that make no sense in context. It makes you spread the virus even though it does not make logical sense for you, personally, to spread it. Michelle Obama is married to a man who controlled, at the time, the world’s largest military, including an enormous stockpile of nuclear weapons, and yet she was tweeting ineffective hashtags to be part of the #movement.
Likewise, the state of gun (and abortion) laws in India is nothing like their state in the US, yet Indians are getting sucked into spreading our viral memes.
Horizontal meme transfer–like social media–promotes the spread of memetic viruses.
I do feel, quite deeply, that America is changing rapidly; a certain old essence is disappearing, even faster than when I was young.
In such cases I think of my father, an old-stock American, Vietnam vet, lover of God, Guns, and Glory–basically all your red state stereotypes.
While chatting with parents down at the local playground, one of the moms claimed to “love” her HOA. Why? I inquired, distressed, because all mine does is wreck the landscaping and eliminate parking. After a moment’s thought, she responded that the HOA prevents people from leaving their trash cans out overnight and stops them from painting their houses strange colors.
Goodnight! Who joins an organization just to meddle with their neighbors?
Of course there are corners of America where people still mind their own business, but we are increasingly squashed into corporate-molded cities where neighbors spend more time worrying about their property values than interacting.
Anyway, I tracked down the book I referenced in the previous post: Childcraft, Volume 11: Music for the Family, with copyrights from 1923-1954 (presumably the copy I hold hails from ’54, as its photos are that era, but the text may be somewhat older.)
Most of the book is children’s songs, but there is a section at the end with biographies of famous composers: Bach, Handel, Haydn, Mozart, Beethoven, Schubert, Chopin, Verdi, Brahms, Tchaikovsky, Grieg, Humperdinck, MacDowell, Debussy, Sousa, and Gershwin. Here are a few excerpts:
“No!” said Father Handel sternly. “My boy shall never be a musician!”
In that day in Germany, musicians were often treated like servants. Father Handel wanted his son to be an important man, not a servant. It was splendid to be a barber-surgeon–like Father Handel–and be called to the castle to trim the duke’s mustache or treat his indigestion. It was even more splendid to be a lawyer, and earn rich fees for giving advice to a prince or a king. But little George Frederick Handel wanted only to be a musician.
In the same year that George Washington was born, an Austrian peasant family named Haydn celebrated the birth of a fair-haired baby boy. They named him Joseph.
Joseph’s father made wheels for wagons and coaches. His mother was a cook for noble families. Both parents loved music. In the evenings, by candlelight, the family often sang songs of the people, or folk melodies…
At one time Haydn played a joke on the powerful Prince Esterhazy, who had hired him as music director. The prince kept his musicians at a palace in the country. He seldom allowed them a vacation. Many of the musicians longed to visit their families. Haydn wished that he might help them. But he did not see what he could do. He did not dare speak directly to the prince about it.
One day Haydn announced that he had written a new symphony. Prince Esterhazy and his court gathered in the great hall of the palace to listen. As the orchestra began the final movement, one by one the players blew out the candles on their music stands and left the hall. Finally only two violinists were playing. Then they too departed, and only the director remained.
Haydn turned and bowed to the prince. “Your Grace,” he said, “I call this the Farewell Symphony.”
The prince looked perplexed, then began to smile at Haydn’s musical prank.
“I can take a hint from old Haydn,” he said. “The musicians may start their vacation tomorrow.” As you may imagine, all the musicians were grateful to their beloved “Papa Haydn.”
By the time Wolfgang was twelve years old, he had played in many great cities of Europe. He was the favorite of queens and princesses. Princes and kings gave him money and jewels. Many musicians envied the young Mozart, because it was then the custom to treat musicians like servants.
It would seem that Mozart’s early life was just one gay adventure. But the boy grew very wise about kings and queens, princes and princesses. He learned that kings and noblemen were just like ordinary people. Some were wise and just. Others were stupid and cruel. Some princesses were gracious and kind. But others had very bad manners, and sometimes young Mozart told them so. He knew that many ordinary persons had better manners and were better people than some of the nobility.
Mozart began to believe that bad and stupid kings had no right to tell people what to do. These were dangerous thoughts, for kings often punished persons who had ideas about freedom. Mozart put his ideas into music, rather than speech.
When Mozart grew to manhood, he wrote operas which poked fun at kings and noblemen. One of these operas is the Marriage of Figaro, which has many lilting melodies. Another is Don Giovanni, in which we hear the lovely “Minuet.”
The music Beethoven wrote shows that he loved people, because it is written for all the people, and not merely for kings and princes. But Beethoven also felt that cruel people had brought much evil into the world. He was happiest when he could be outdoors, in rain or sunshine, and listen to the songs of Nature.
The Patriot Composer of Poland
Father Chopin began a merry Polish folk tune on his flute. Little Frederic sat still and listened. Soon a tear rolled down his cheek and dropped on his blouse.
The music of the flute rose higher. It danced like a happy peasant girl. It trilled and whistled like the song of a bird. Little Frederic’s chin began to tremble. He opened his mouth wide and began to cry.
Father and Mother Chopin loved Frederic deeply. But they also loved music, and they were sad because their little son seemed to dislike it so. …
Upstairs, the boy who should have been asleep lay awake listening. He squeezed his pillow tight against his eyes to keep the tears back. How could they say he hated music! His tears were not tears of pain, but of joy. Frederic loved music so much that the sound of it made him weep. But he was so young that he could not find the words to tell his parents how he felt. …
Young Chopin began to compose his own music almost as soon as he could play the piano. His compositions were influenced by the kinds of music his parents loved best. His father had come from France, and often played the music of that country on his flute. Frederic liked the French music, but most of all he loved the songs his mother sang–songs of his native Poland. It is the Polish music he wrote that is most popular.
Frederic’s mother told him that Poland had once been a proud and free country. Then neighbor nations had taken away its freedom. The Polish people remembered the days when their country was free, and sang songs about the land they loved. Frederic used these national songs in his compositions for the piano. …
Chopin’s love for his country speaks through his music, like a beautiful language which the people of all countries can understand. Chopin’s stirring music still has the power to make strong men and women of any country weep, just as a little boy wept over a Polish folk tune many years ago.
Now let’s take a look at Mathematicians are People, Too: Stories from the lives of the great mathematicians (copyright 1990). (I would like to note that this is not a bad book; I am just trying to highlight the change in political tone/emphasis over the decades.) It covers Thales, Pythagoras, Archimedes, Hypatia, Napier, Galileo, Pascal, Newton, Euler, Lagrange, Sophie Germain, Gauss, Galois, Amalie (Emmy) Noether, and Ramanujan.
There is a sequel which I have not yet read, published in 1995, which covers Euclid, Omar Khayyam, Fibonacci, Descartes, Fermat, Cardano, Maria Agnesi, Benjamin Banneker, Mary Somerville, Ada Lovelace, Babbage, Sonya Kovalevsky, Niels Abel, George Polya, and Einstein.
But Hypatia was not only a well-known scientist and mathematician; she also became a highly respected philosopher. Her father had taught her to be open-minded about ideas. Like many Greeks, he believed people should keep questioning rather than settle on one version of truth as final. He introduced her to a variety of religions, and she learned to value the good in each. Because of this, she taught her students to ask lots of questions, even about ideas that government or religious leaders said they should not question. Eventually, this caused trouble for Hypatia.
Hypatia got caught in the middle of a struggle between two leaders in Alexandria. Orestes, prefect or governor of Alexandria, was Hypatia’s friend. They enjoyed talking together and often wrote letters about the latest ideas. Cyril was the archbishop of Alexandria, the head of the Christian church in that city. He was suspicious of anyone who did not accept his religious views. Conflict developed between the two men and their followers, and Cyril became convinced that Hypatia was behind it. …
An angry mob of religious fanatics, fired up by false rumors of Hypatia’s teaching, kidnapped her one day as she rode through town on her chariot. They dragged her through the streets to the cathedral, where she was brutally murdered and her bones burned. Her death marks the end of the great age of Greek Mathematics. …
Although Hypatia made many important contributions to mathematics and science, few women have adopted her interests–until recently. Some historians believe that Hypatia’s horrible death may have discouraged other women from becoming mathematicians. Still others believe that Hypatia’s life–not her death–is the perfect symbol of what women or men can achieve when they work hard and stand up for what they believe is right.
(A lot of mathematicians in this book, including Pythagoras, Hypatia, and Archimedes, were murdered. Apparently mathematician is a much more dangerous profession than composer.)
Lagrange’s influence was beginning to be felt throughout the scientific communities of Europe. King Frederick of Prussia had formed a prestigious college of mathematics in Berlin. Frederick sent this rather impressive invitation to Lagrange: “The greatest king in Europe must have the greatest mathematician in Europe in his court!”
Clearly, Frederick was not as modest as Lagrange, but he was an avid supporter of science and mathematics. …
Lagrange was quick to praise persons who had encouraged or influenced him. He applauded when Napoleon ordered a tribute to Lagrange’s father, still living in Italy. He acknowledged the greatness of Euler. He mourned when the chemist Lavoisier was sentenced to death by guillotine. And just as he recognized those who had affirmed him, he was quick to encourage younger mathematicians.
Once, while teaching at the Ecole Polytechnique, he received an impressive paper from Monsieur LeBlanc. … After some research, he discovered that the mystery student was really a young woman named Sophie Germain. Only men were allowed at the Ecole, so Sophie had borrowed lecture notes from friends and asked them to smuggle her paper in among theirs. Lagrange went immediately to her home and made her feel like a true mathematician, helping launch her important career.
When Sophie was very young, her parents had welcomed her interest. They allowed her to use her father’s library whenever she wished. But soon they decided that she was studying too much. They agreed with the popular notion that “brainwork” was not healthy–maybe even dangerous–for girls. They told Sophie that she could not study mathematics anymore.
But Sophie would not give up. Night after night she crawled out of bed and studied after everyone else had gone to sleep. …
“Oh, Father, I’m so sorry, but I just can’t stop,” Sophie cried. “These problems are so fascinating! When I work on them I feel like I’m really alive.”
“But, Sophie,” her mother said softly, “remember, you’re a girl. It isn’t good for you to fill your mind with numbers.” …
With that her parents gave up. Sophie was allowed to study to her heart’s content. Fortunately, her father had an excellent library. As wealthy citizens, the Germain family knew many educated people in Paris and throughout France.
When Sophie was young, however, traveling and visiting were restricted by the political turmoil in France. The French Revolution began in 1789 when she was thirteen, and Paris was an unstable and dangerous city… Sophie’s parents shielded her from the fighting and conflict. She eagerly filled her time reading and learning. …
In 1816 mathematicians and scientists around the world heard about Sophie Germain. In that year she won the grand prize from the French Academy for her work on the law of vibrating elastic surfaces…
Sophie Germain enjoyed only a brief moment of recognition for a lifetime of dedicated study. The barriers to women in mathematics certainly hampered Germain’s development–but they did not prevent her from following her quest.
Galois could have coped with normal disappointments, but so many setbacks took their toll on him. Bitterness filled him. He began to distrust all teachers and all institutions. He tried starting his own school, but no one enrolled. Then, because he wanted to fight injustice, he got involved in politics. He joined the Republicans, a forbidden radical group. They spoke out for justice, especially for the poor, and for freedom of the press. They wanted a better standard of living for the common people, instead of for the wealthy few.
Galois ended up in prison for his political activities, then got killed in a duel at the age of 20.
My goal isn’t to dissect the truth of these stories (often children’s biographies are at least a bit fictionalized), but to examine what the authors chose to highlight. We often don’t even notice the political beliefs of our own age (“Of course they did it that way. It’s only natural,”) but can easily see the politics of another age.
The cover of the Childcraft book on music features two children holding a book (on the book’s cover are two more children, holding a book…) Mathematicians are People, Too, features Amalie Noether happily studying math while her flustered mother (dressed like a maid) looks on in consternation. Volume two has African American Benjamin Banneker on its cover. (Silly me, I would have put Euclid and Newton on the covers and probably not had as many sales.)
It took a bit of digging to find the full list of mathematicians in Volume 2–the book’s blurb on Amazon only lists Omar Khayyam, Albert Einstein*, Ada Lovelace, and “others.” Clearly, during the production of Volume 1, the authors were thinking about how to emphasize women in mathematics; by Volume 2, they wanted to emphasize diversity. The publishers didn’t even think it worthwhile to list Euclid!
*I love Einstein as much as the next guy, but he’s not a mathematician.
To be fair, there are probably more people looking for biographies of Ada Lovelace or Einstein than of Euclid, though personally I spend a fair amount of time thinking “When do we start Euclid? Is there a children’s version of his Elements?” and not much time thinking, “When do we start Ada Lovelace?”
So one of the major differences between these two works lies not in the explicit phrasing of the stories, but in the frame of the particular people they chose to highlight. Why Benjamin Banneker? Unlike Omar Khayyam, he didn’t contribute very much to mathematics, and we have not exhausted our list of great mathematicians such that we need to go searching for obscure ones. Surely Turing, Erdos, von Neumann, al-Khwarizmi, or Aryabhata contributed far more–but perhaps that doesn’t matter, as the book’s target market can hardly understand advanced math in the first place. Banneker was chosen because the authors believe that it is important to have an African American character in order to appeal to African American readers.
The conclusion of Hypatia’s story is more explicitly political–Hypatia wasn’t killed because she was a female mathematician and her story certainly hasn’t discouraged women from doing math–if the authors thought it did, they wouldn’t have put it in the book!
Do the political messages in children’s books matter? Do they create culture, or are they created by culture? Chickens and eggs. Either way, culture has changed. Politics have changed. People have changed. Technology has changed.
1950s civics class didn’t happen in a vacuum–and I don’t think the political culture that created it is coming back.
The material-grievances theory and the cultural-resentments theory can fit together because, in both cases, they tell us that people voted for Trump out of a perceived self-interest, which was to improve their faltering economic and material conditions, or else to affirm their cultural standing vis-à-vis the non-whites and the bicoastal elites. Their votes were, from this standpoint, rationally cast. … which ultimately would suggest that 2016’s election was at least a semi-normal event, even if Trump has his oddities. But here is my reservation.
I do not think the election was normal. I think it was the strangest election in American history in at least one major particular, which has to do with the qualifications and demeanor of the winning candidate. American presidents over the centuries have always cultivated, after all, a style, which has been pretty much the style of George Washington, sartorially updated. … Now, it is possible that, over the centuries, appearances and reality have, on occasion, parted ways, and one or another president, in the privacy of his personal quarters, or in whispered instructions to his henchmen, has been, in fact, a lout, a demagogue, a thug, and a stinking cesspool of corruption. And yet, until just now, nobody running for the presidency, none of the serious candidates, would have wanted to look like that, and this was for a simple reason. The American project requires a rigorously republican culture, without which a democratic society cannot exist—a culture of honesty, logic, science, and open-minded debate, which requires, in turn, tolerance and mutual respect. Democracy demands decorum. And since the president is supposed to be democracy’s leader, the candidates for the office have always done their best to, at least, put on a good act.
The author (Paul Berman) then proposes Theory III: Broad Cultural Collapse:
A Theory 3 ought to emphasize still another non-economic and non-industrial factor, apart from marriage, family structure, theology, bad doctors, evil pharmaceutical companies, and racist ideology. This is a broad cultural collapse. It is a collapse, at minimum, of civic knowledge—a collapse in the ability to identify political reality, a collapse in the ability to recall the nature of democracy and the American ideal. An intellectual collapse, ultimately. And the sign of this collapse is an inability to recognize that Donald Trump has the look of a foreign object within the American presidential tradition.
Berman is insightful until he blames cultural collapse on the educational system (those dastardly teachers just decided not to teach about George Washington, I guess.)
We can’t blame education. Very few people had many years of formal education of any sort back in 1776 or 1810–even in 1900, far fewer people completed high school than do today. The idea that civics education more effectively taught future voters what to look for in a president in 1815 than it does today therefore seems unlikely.
If anything, in my (admittedly limited, parental) interactions with the local schools, education seems to lag national sentiment. For example, the local schools still cover Columbus Day in a pro-Columbus manner (and I don’t even live in a particularly conservative area) and have special Veterans’ Day events. School curricula are, I think, fairly influenced by the desires of the Texas schools, because Texas is a big state that buys a lot of textbooks.
I know plenty of Boomers who voted for Trump, so if we’re looking at a change in school curricula, we’re looking at a shift that happened half a century ago (or more), but only recently manifested.
That said, I definitely feel something coursing through society that I could call “Cultural Collapse.” I just don’t think the schools are to blame.
Yesterday I happened across a children’s book about famous musicians from the 1920s. Interwoven with the biographies of Beethoven and Mozart were political comments about kings and queens, European social structure and how these musicians of course saw through all of this royalty business and wanted to make music for the common people. It was an articulated ideology of democracy.
Sure, people today still think democracy is important, but the framing (and phrasing) is different. The book we recently read of mathematicians’ biographies didn’t stop to tell us how highly the mathematicians thought of the idea of common people voting (rather, when it bothered with ideology, it focused on increasing representation of women in mathematics and emphasizing the historical obstacles they faced.)
According to the Mounk-Foa early-warning system, signs of democratic deconsolidation in the United States and many other liberal democracies are now similar to those in Venezuela before its crisis.
Across numerous countries, including Australia, Britain, the Netherlands, New Zealand, Sweden and the United States, the percentage of people who say it is “essential” to live in a democracy has plummeted, and it is especially low among younger generations. …
Support for autocratic alternatives is rising, too. Drawing on data from the European and World Values Surveys, the researchers found that the share of Americans who say that army rule would be a “good” or “very good” thing had risen to 1 in 6 in 2014, compared with 1 in 16 in 1995.
That trend is particularly strong among young people. For instance, in a previously published paper, the researchers calculated that 43 percent of older Americans believed it was illegitimate for the military to take over if the government were incompetent or failing to do its job, but only 19 percent of millennials agreed. The same generational divide showed up in Europe, where 53 percent of older people thought a military takeover would be illegitimate, while only 36 percent of millennials agreed.
Note, though, that this is not a local phenomenon–any explanation that explains why support for democracy is down in the US needs to also explain why it’s down in Sweden, Australia, Britain, and the Netherlands (and maybe why it wasn’t so popular there in the first place.)
Here are a few different theories besides failing schools:
Less common culture, due to integration and immigration
More international culture, due to the internet, TV, and similar technologies
Put yourself in your grandfather or great-grandfather’s shoes, growing up in the 1910s or 20s. Cars were not yet common; chances were if he wanted to go somewhere, he walked or rode a horse. Telephones and radios were still rare. TV barely existed.
If you wanted to talk to someone, you walked over to them and talked. If you wanted to talk to someone from another town, either you or they had to travel, often by horse or wagon. For long-distance news, you had newspapers and a few telegraph wires.
News traveled slowly. People traveled slowly (most people didn’t ride trains regularly.) Most of the people you talked to were folks who lived nearby, in your own community. Everyone not from your community was some kind of outsider.
During World War II, for example, three German submariners escaped from Camp Crossville, Tennessee. Their flight took them to an Appalachian cabin, where they stopped for a drink of water. The mountain granny told them to “git.” When they ignored her, she promptly shot them dead. The sheriff came, and scolded her for shooting helpless prisoners. Granny burst into tears, and said that she would not have done it if she had known they were Germans. The exasperated sheriff asked her what in “tarnation” she thought she was shooting at. “Why,” she replied, “I thought they was Yankees!”
And then your grandfather got shipped out to get shot at somewhere in Europe or the Pacific.
Today, technology has completely transformed our lives. When we want to talk to someone or hear their opinion, we can just pick up the phone, visit facebook, or flip on the TV. We have daily commutes that would have taken our ancestors a week to walk. People expect to travel thousands of miles for college and jobs.
The effect is a curious inversion: In a world where you can talk to anyone, why talk to your neighbors? Personally, I spend more time talking to people in Britain than to the folks next door (and I like my neighbors).
Now, this blog was practically founded on the idea that this technological shift in the way ideas (memes) are transmitted has a profound effect on the kinds of ideas that are transmitted. When ideas must be propagated between relatives and neighbors, these ideas are likely to promote your own material well-being (as you must survive well enough to continue propagating the idea for it to go on existing,) whereas when ideas can be easily transmitted between strangers who don’t even live near each other, the ideas need not promote personal survival–they just need to sound good. (I went into more detail on this idea back in Viruses Want you to Spread Them, Mitochondrial Memes, and The Progressive Virus.)
How do these technological shifts affect how we form communities?
In a groundbreaking book based on vast data, Putnam shows how we have become increasingly disconnected from family, friends, neighbors, and our democratic structures– and how we may reconnect.
Putnam warns that our stock of social capital – the very fabric of our connections with each other – has plummeted, impoverishing our lives and communities.
Putnam draws on evidence including nearly 500,000 interviews over the last quarter century to show that we sign fewer petitions, belong to fewer organizations that meet, know our neighbors less, meet with friends less frequently, and even socialize with our families less often. We’re even bowling alone. More Americans are bowling than ever before, but they are not bowling in leagues. Putnam shows how changes in work, family structure, age, suburban life, television, computers, women’s roles and other factors have contributed to this decline.
The National Science Foundation (NSF) reported in its General Social Survey (GSS) that unprecedented numbers of Americans are lonely. Published in the American Sociological Review (ASR) and authored by Miller McPherson, Lynn Smith-Lovin, and Matthew Brashears, sociologists at Duke and the University of Arizona, the study featured 1,500 face-to-face interviews where more than a quarter of the respondents — one in four — said that they have no one with whom they can talk about their personal troubles or triumphs. If family members are not counted, the number doubles to more than half of Americans who have no one outside their immediate family with whom they can share confidences. Sadly, the researchers noted increases in “social isolation” and “a very significant decrease in social connection to close friends and family.”
Rarely has news from an academic paper struck such a responsive nerve with the general public. These dramatic statistics from ASR parallel similar trends reported by the Beverly LaHaye Institute — that over the 40 years from 1960 to 2000 the Census Bureau had expanded its analysis of what had been a minor category. The Census Bureau categorizes the term “unrelated individuals” to designate someone who does not live in a “family group.” Sadly, we’ve seen the percentage of persons living as “unrelated individuals” almost triple, increasing from 6 to 16 percent of all people during the last 40 years. A huge majority of those classified as “unrelated individuals” (about 70 percent) lived alone.
Long-run data from the US, where the General Social Survey (GSS) has been gathering information about trust attitudes since 1972, suggests that people trust each other less today than 40 years ago. This decline in interpersonal trust in the US has been coupled with a long-run reduction in public trust in government – according to estimates compiled by the Pew Research Center since 1958, today trust in the government in the US is at historically low levels.
Interpersonal trust attitudes correlate strongly with religious affiliation and upbringing. Some studies have shown that this strong positive relationship remains after controlling for several survey-respondent characteristics.1 This, in turn, has led researchers to use religion as a proxy for trust, in order to estimate the extent to which economic outcomes depend on trust attitudes. Estimates from these and other studies using an instrumental-variable approach suggest that trust has a causal impact on economic outcomes.2 This suggests that the remarkable cross-country heterogeneity in trust that we observe today can explain a significant part of the historical differences in cross-country income levels.
Measures of trust from attitudinal survey questions remain the most common source of data on trust. Yet academic studies have shown that these measures of trust are generally weak predictors of actual trusting behaviour. Interestingly, however, questions about trusting attitudes do seem to predict trustworthiness. In other words, people who say they trust other people tend to be trustworthy themselves.3
Our technological shifts haven’t just affected ideas and conversations–with people able to travel thousands of miles in an afternoon, they’ve also affected the composition of communities. The US in 1920 was almost 90% white and 10% black, (with that black population concentrated in the segregated South). All other races together totaled only a couple percent. Today, the US is <65% white, 13% black, 16% Hispanic, 6% Asian and Native American, and 9% “other” or multi-racial.
Similar changes have happened in Europe, both with the creation of the Free Movement Zone and the discovery that the Mediterranean isn’t that hard to cross, though the composition of the newcomers obviously differs.
Diversity may have its benefits, but one of the things it isn’t is a common culture.
With all of these changes, do I really feel that there is anything particularly special about my local community and its norms over those of my British friends?
What about Disney?
Well, Disney’s most profitable product hasn’t exactly been pro-democracy, though I doubt a few princess movies can actually budge people’s political compasses or make them vote for Trump (or Hillary). But what about the general content of children’s stories? It sure seems like there are a lot fewer stories focused on characters from American history than in the days when Davy Crockett was the biggest thing on TV.
Of course this loops back into technological changes, as American TV and movies are enjoyed by an increasingly non-American audience and media content is driven by advertisers’ desire to reach specific audiences (eg, the “rural purge” in TV programming, when popular TV shows aimed at more rural or older audiences were cancelled in favor of programs featuring urban characters, which advertisers believed would appeal to younger viewers with more cash to spend.)
If cultural collapse is happening, it’s not because we lack for civics classes, but because civics classes alone cannot create a civic culture where there is none.
I don’t want to be one of those people who just gets attached to whatever was on the radio when they were 14 years old (or 18, or whenever) and never learns to like anything else, because that’s incredibly stupid.
But I don’t exactly have time to be involved in the club scene and I feel disconnected from whatever is going on in music these days (if anything, I have the distinct feeling that “music these days” is much less of a thing… Maybe because kids these days are more into doing SJW things on tumblr than going out or buying albums.)
I’m hard pressed to claim I have a favorite song, but here are some I enjoy:
Please share some of your favorites in the comments.
Bonus question: do you think different musical genres appeal to different kinds of people outside of habit or ethnic background? (IE, obviously I’d expect Mexican singers to be more popular in Mexico and Pakistani singers to be popular in Pakistan, but do particular sorts of tunes appeal to different personalities?)
As we were discussing on Monday, as our networks have become more effective, our ability to incorporate new information may have actually gone down. Ironically, as we add more people to a group–beyond a certain limit–it becomes more difficult for individuals with particular expertise to convince everyone else in the group that the group’s majority consensus is wrong.
The difficulties large groups experience trying to coordinate and share information force them to become dominated by procedures–set rules of behavior and operation are necessary for large groups to operate. A group of three people can use ad-hoc consensus and rock-paper-scissors to make decisions; a nation of 320 million requires a complex body of laws and regulations. (I once tried to figure out just how many laws and regulations America has. The answer I found was that no one knows.)
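The arithmetic behind that scaling problem can be sketched in a few lines (this is my own toy model, not anything from the research discussed here): under simple majority rule, the number of people a lone expert must win over grows with the group, so the same persuasive effort that flips a committee of three does nothing in a group of a hundred.

```python
def consensus_flips(group_size: int, persuaded: int) -> bool:
    """True if a lone dissenting expert, plus the members she has
    persuaded, forms a strict majority of the group."""
    dissenters = 1 + persuaded  # the expert herself plus her converts
    return 2 * dissenters > group_size

# Persuading one colleague flips a group of three...
assert consensus_flips(3, 1)
# ...but is invisible in a group of 101, where the expert
# must win over 50 people to change the consensus.
assert not consensus_flips(101, 1)
assert consensus_flips(101, 50)
```

The model ignores persuasion cascades and weighted influence, of course; the point is only that the threshold scales linearly with group size while an individual's reach does not.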
An organization is initially founded to accomplish some purpose that benefits its founders–generally to make them well-off, but often also to produce some useful good or service. A small organization is lean, efficient, and generally exemplifies the ideals put forth in Adam Smith’s invisible hand:
It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest. We address ourselves, not to their humanity but to their self-love, and never talk to them of our necessities but of their advantages. —The Wealth Of Nations, Book I
As an organization ages and grows, its founders retire or move on, it becomes more dependent on policies and regulations and each individual employee finds his own incentives further displaced from the company’s original intentions. Soon a company is no longer devoted to either the well-being of its founders or its customers, but to the company itself. (And that’s kind of a best-case scenario in which the company doesn’t just disintegrate into individual self-interest.)
I am reminded of a story about a computer that had been programmed to play Tetris–actually, it had been programmed not to lose at Tetris. So the computer paused the game. A paused game cannot lose.
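As a toy sketch of that incentive structure (my own illustration; the penalty numbers are invented, and this is not how the actual Tetris experiment worked): an agent scored only on avoiding loss will rank "pause" above every real move, because pausing is the one action with zero risk.

```python
# Invented risk estimates: every real move carries some chance of
# eventually losing; pausing freezes the game, so the loss never arrives.
LOSS_RISK = {"left": 0.3, "right": 0.3, "rotate": 0.2, "drop": 0.5, "pause": 0.0}

def best_action(risks: dict) -> str:
    """Pick the action that minimizes the chance of losing."""
    return min(risks, key=risks.get)

assert best_action(LOSS_RISK) == "pause"  # a paused game cannot lose
```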
What percentage of employees (especially management) have been incentivized to win? And what percentage are being incentivized to not lose?
And no, I don’t mean that in some 80s buzzword-esque way. Most employees have more to lose (ie, their jobs) if something goes wrong as a result of their actions than to gain if something goes right. The stockholders might hope that employees are doing everything they can to maximize profits, but really, most people are trying not to mess up and get fired.
Fear of messing up goes beyond the individual scale. Whole companies are goaded by concerns about risk–“Could we get sued?” Large corporations have entire legal teams devoted to telling them how they could get sued for whatever they’re doing and to filing lawsuits against their competitors for whatever they’re doing.
A family in a town I visited bought an old fire station a few years ago with the intention of turning it into a Portuguese bakery and brewpub. They thought they’d have to retrofit the interior of the building to meet health and safety standards for such an establishment.
Turns out the cost of bringing the landscape around the outside of the building up to code was their primary impediment. Mandatory parking requirements, sidewalks, curb cuts, fire lanes, on-site stormwater management, handicapped accessibility, drought-tolerant native plantings…it’s a very long list that totaled $340,000 worth of work. … Guess what? They decided not to open the bakery or brewery. …
Individually it’s impossible to argue against each of the particulars. Do you really want to deprive people in wheelchairs of the basic civil right of public accommodation? Do you really want the place to catch fire and burn? Do you want a barren landscape that’s bereft of vegetation? …
I was in Hamtramck, Michigan a couple of years ago to participate in a seminar about reactivating neighborhoods through incremental small-scale development. …
While the event was underway the fire marshal happened to drive by and noticed there were people—a few dozen actual humans—occupying a commercial building in broad daylight. In a town that has seen decades of depopulation and disinvestment, this was an odd sight. And he was worried. Do people have permission for this kind of activity? Had there been an inspection? Was a permit issued? Is everything insured? He called one of his superiors to see if he should shut things down in the name of public safety.
It’s a good article. You should read the whole thing.
Back in Philippe Bourgois’s In Search of Respect: Selling Crack in El Barrio, Bourgois describes one drug dealer’s attempt to use the money he’d made to go into honest business by opening a convenience store. Unfortunately, he couldn’t get the store compliant with NYC disability-access regulations, and so the store never opened and the owner went back to dealing drugs. (What IQ, I wonder, is necessary to comply with all of these laws and regulations in the first place?)
Now, I’m definitely in favor of disabled people being able to buy groceries and use bathrooms. But what benefits a disabled person more: a convenience store that’s not fully wheelchair-accessible, or a crack house?
In My IRB Nightmare, Scott Alexander writes about trying to do a simple study to determine whether the screening test already being used to diagnose people with bipolar disorder is effective at diagnosing them:
When we got patients, I would give them the bipolar screening exam and record the results. Then Dr. W. would conduct a full clinical interview and formally assess them. We’d compare notes and see how often the screening test results matched Dr. W’s expert diagnosis.
Remember, they were already using the screening test on patients and then having them talk to the doctor for a formal assessment. The only thing the study added was that Scott would compare how well the screening results matched the formal assessment. No patients would be injected, subjected to new procedures, or even asked different questions. They just wanted to compare two data sets.
After absurd quantities of paperwork and an approval process much too long to summarize here, the project got audited:
I kept the audit report as a souvenir. I have it in front of me now. Here’s an example infraction:
The data and safety monitoring plan consists of ‘the Principal Investigator will randomly check data integrity’. This is a prospective study with a vulnerable group (mental illness, likely to have diminished capacity, likely to be low income) and, as such, would warrant a more rigorous monitoring plan than what is stated above. In addition to the above, a more adequate plan for this study would also include review of the protocol at regular intervals, on-going checking of any participant complaints or difficulties with the study, monitoring that the approved data variables are the only ones being collected, regular study team meetings to discuss progress and any deviations or unexpected problems. Team meetings help to assure participant protections, adherence to the protocol. Having an adequate monitoring plan is a federal requirement for the approval of a study. See Regulation 45 CFR 46.111 Criteria For IRB Approval Of Research. IRB Policy: PI Qualifications And Responsibility In Conducting Research. Please revise the protocol via a protocol revision request form. Recommend that periodic meetings with the research team occur and be documented.
… Faced with submitting twenty-seven new pieces of paperwork to correct our twenty-seven infractions, Dr. W and I gave up. We shredded the patient data and the Secret Code Log. We told all the newbies they could give up and go home. … We told the IRB that they had won, fair and square; we surrendered unconditionally.
The point of all that paperwork and supervision is to make sure that no one replicates the Tuskegee Syphilis Experiment or anything the Nazis did. Noble sentiments–but as a result, a study comparing two data sets had to be canceled.
I’ve noticed recently that much of the interesting medical research is happening in the third world/China–places where the regulations aren’t as strong and experiments (of questionable ethics or not) can actually get done.
Like the computer taught not to lose at Tetris, all of these systems are more focused on minimizing risk–even non-existent risk–than on actually succeeding.
…[Yudkowsky] continues to the case of infant parenteral nutrition. Some babies have malformed digestive systems and need to have nutrient fluid pumped directly into their veins. The nutrient fluid formula used in the US has the wrong kinds of lipids in it, and about a third of babies who get it die of brain or liver damage. We’ve known for decades that the nutrient fluid formula has the wrong kind of lipids. We know the right kind of lipids and they’re incredibly cheap and there is no reason at all that we couldn’t put them in the nutrient fluid formula. We’ve done a bunch of studies showing that when babies get the right nutrient fluid formula, the 33% death rate disappears. But the only FDA-approved nutrient fluid formula is the one with the wrong lipids, so we just keep giving it to babies, and they just keep dying. Grant that the FDA is terrible and ruins everything, but over several decades of knowing about this problem and watching the dead babies pile up, shouldn’t somebody have done something to make this system work better?
The doctors have to use the FDA-approved formula or they could get sued for malpractice. The insurance companies, of course, only cover the FDA-approved formula. The formula makers are already making money selling the current formula and would probably have to go through an expensive, multi-year review system (with experiments far more regulated than Scott’s) to get the new formula approved, and even then they might not actually get approval. In short, on one side are people in official positions of power whose lives could be made worse (or less convenient) if they tried to fix the problem, and on the other side are dead babies who can’t stand up for themselves.
Communism strikes me as the ultimate expression of this beast: a society fully transformed into a malevolent AI. It’s impossible to determine exactly how many people were murdered by communism, but the Black Book of Communism estimates a death toll between 85 and 100 million people.
Capitalism, for all its faults, is at least somewhat decentralized. If you make a bad business decision, you suffer the consequences and can hopefully learn from your mistakes and make better decisions in the future. But in communist systems, one central planner’s bad decisions can cause suffering for millions of other people, resulting in mass death. Meanwhile, the central planner may suffer for correcting the bad decision. Centralized economies simply lack the feedback loops necessary to fix problems before they start killing people.
While FDA oversight of medicines is probably important, would it be such a bad thing if a slightly freer market in parenteral nutrition allowed parents to choose between competing brands of formula, each promising not to kill your baby?
There’s an interesting post-mortem on the rise and fall of the clickbait liberalism site Mic.com, which attracted an alleged 65 million unique visitors on the strength of Woketastic personal stories like “5 Powerful Reasons I’m a (Male) Feminist,” …
Every time Mic had a hit, it would distill that success into a formula and then replicate it until it was dead. Successful “frameworks,” or headlines, that went through this process included “Science Proves TK,” “In One Perfect Tweet TK,” “TK Reveals the One Brutal Truth About TK,” and “TK Celebrity Just Said TK Thing About TK Issue. Here’s why that’s important.” At one point, according to an early staffer who has since left, news writers had to follow a formula with bolded sections, which ensured their stories didn’t leave readers with any questions: The intro. The problem. The context. The takeaway.
…But the success of Mic.com was due to algorithms built on top of algorithms. Facebook targets which links are visible to users based on complex and opaque rules, so it wasn’t just the character of the 2010s American population that was receptive to Mic.com’s specific brand of SJW outrage clickbait, but Facebook’s rules for which articles to share with which users and when. These rules, in turn, are calibrated to keep users engaged in Facebook as much as possible and provide the largest and most receptive audience for its advertisers, as befits a modern tech giant in a two-sided market.
The ideal Head Girl is an all-rounder: performs extremely well in all school subjects and has a very high Grade Point Average. She is excellent at sports, Captaining all the major teams. She is also pretty, popular, sociable and well-behaved.
The Head Girl will probably be a big success in life, in whatever terms being a big success happens to be framed …
But the Head Girl is not, cannot be, a creative genius. …
The more selective the social system, the more it will tend to privilege the Head Girl and eliminate the creative genius.
Committees, peer review processes, voting – anything which requires interpersonal agreement and consensus – will favour the Head Girl and exclude the creative genius. …
We live in a Head Girl’s world – which is also a world where creative genius is marginalized and disempowered to the point of near-complete invisibility.
The quest for social status is, I suspect, one of the things driving the system. Status-oriented people refuse to accept information that comes from people lower status than themselves, which renders system feedback even more difficult. The internet as a medium of information sharing is beautiful; the internet as a medium of status signalling is horrible.
So what do you think? Do sufficiently large organizations start acting like malevolent (or hostile) AIs?
AI typically refers to any kind of intelligence or ability to learn possessed by machines. Malevolent AI occurs when a machine pursues its programmed objectives in a way that humans find horrifying or immoral. For example, a machine programmed to make paperclips might decide that the easiest way to maximize paperclip production is to enslave humans to make paperclips for it. Superintelligent AI is AI that has figured out how to make itself smarter and thus keeps getting smarter and smarter. (Should we develop malevolent superintelligent AI, then we’ll really have something to worry about.)
Note: people who actually study AI probably have better definitions than I do.
While we like to think of ourselves (humans) as unique, thinking individuals, it’s clear that many of our ideas come from other people. Chances are good you didn’t think up washing your hands or brushing your teeth by yourself, but learned about them from your parents. Society puts quite a bit of effort, collectively speaking, into teaching children all of the things people have learned over the centuries–from heliocentrism to the fact that bleeding patients generally makes diseases worse, not better.
Just as we cannot understand the behavior of ants or bees simply by examining the anatomy of a single ant or single bee, but must look at the collective life of the entire colony/hive, so we cannot understand human behavior by merely examining a single human, but must look at the collective nature of human societies. “Man is a political animal,” by which Aristotle did not mean that we are inherently inclined to fight over transgender bathrooms, but that we are instinctively social:
Hence it is evident that the state is a creation of nature, and that man is by nature a political animal. And he who by nature and not by mere accident is without a state, is either above humanity, or below it; he is the ‘Tribeless, lawless, hearthless one,’ whom Homer denounces—the outcast who is a lover of war; he may be compared to a bird which flies alone.
Now the reason why man is more of a political animal than bees or any other gregarious animals is evident. Nature, as we often say, makes nothing in vain, and man is the only animal whom she has endowed with the gift of speech. And whereas mere sound is but an indication of pleasure or pain, and is therefore found in other animals (for their nature attains to the perception of pleasure and pain and the intimation of them to one another, and no further), the power of speech is intended to set forth the expedient and inexpedient, and likewise the just and the unjust. And it is a characteristic of man that he alone has any sense of good and evil, of just and unjust, and the association of living beings who have this sense makes a family and a state. –Aristotle, Politics
With very rare exceptions, humans–all humans, in all parts of the world–live in groups. Tribes. Families. Cities. Nations. Our nearest primate relatives, chimps and bonobos, also live in groups. Primates are social, and their behavior can only be understood in the context of their groups.
Groups of humans are able to operate in ways that individual humans cannot, drawing on the collective memories, skills, and knowledge of their members to create effects much greater than what could be achieved by each person acting alone. For example, one lone hunter might be able to kill a deer–or if he is extremely skilled, hardworking, and lucky, a dozen deer–but ten hunters working together can drive an entire herd of deer over a cliff, killing hundreds or even thousands. (You may balk at the idea, but many traditional hunting societies were dependent on only a few major hunts of migrating animals to provide the majority of their food for the entire year–meaning that those few hunts had to involve very high numbers of kills or else the entire tribe would starve while waiting for the animals to return.)
Chimps have never, to my knowledge, driven megafauna to extinction–but humans have a habit of doing so wherever they go. Humans are great at what we do, even if we aren’t always great at extrapolating long-term trends.
But the beneficial effects of human cooperation don’t necessarily continue to increase as groups grow larger–China’s 1.3 billion people don’t appear to have better lives than Iceland’s 332,000 people. Indeed, there probably is some optimal size–depending on activity and available communications technology–beyond which the group struggles to coordinate effectively and begins to degenerate.
The trope that the likelihood of an accurate group decision increases with the abundance of brains involved might not hold up when a collective faces a variety of factors — as often happens in life and nature. Instead, Princeton University researchers report that smaller groups actually tend to make more accurate decisions, while larger assemblies may become excessively focused on only certain pieces of information. …
collective decision-making has rarely been tested under complex, “realistic” circumstances where information comes from multiple sources, the Princeton researchers report in the journal Proceedings of the Royal Society B. In these scenarios, crowd wisdom peaks early then becomes less accurate as more individuals become involved, explained senior author Iain Couzin, a professor of ecology and evolutionary biology. …
The researchers found that the communal ability to pool both pieces of information into a correct, or accurate, decision was highest in a band of five to 20. After that, the accurate decision increasingly eluded the expanding group.
Couzin found that in small groups, people with specialized knowledge could effectively communicate that to the rest of the group, whereas in larger groups, they simply couldn’t convey their knowledge to enough people and group decision-making became dominated by the things everyone knew.
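Couzin’s finding can be illustrated with a toy simulation. This is my own construction, not the researchers’ actual model, and every parameter here (cue reliabilities, the 50/50 chance of following the shared cue, the trial count) is an assumption chosen purely for illustration: each agent follows either a shared public cue, which everyone sees and which is only slightly better than chance, or a private independent cue that is somewhat more reliable. Majority vote is most accurate in small groups; as the group grows, the one shared cue swamps the independent information and accuracy sinks back toward the shared cue’s reliability.

```python
import random

def group_accuracy(n_agents, n_trials=20000, p_shared=0.55, p_private=0.6,
                   w_shared=0.5, seed=42):
    """Fraction of trials in which a majority vote picks the correct option.

    Each trial draws ONE shared cue (identical for every agent, correct
    with probability p_shared) and an independent private cue per agent
    (correct with probability p_private). Each agent follows the shared
    cue with probability w_shared, otherwise its own private cue.
    """
    rng = random.Random(seed)
    correct_trials = 0
    for _ in range(n_trials):
        shared_is_right = rng.random() < p_shared  # one draw, seen by all
        votes_right = 0
        for _ in range(n_agents):
            if rng.random() < w_shared:
                votes_right += shared_is_right          # follow the crowd cue
            else:
                votes_right += rng.random() < p_private  # follow private info
        if votes_right * 2 > n_agents:                   # strict majority
            correct_trials += 1
    return correct_trials / n_trials

if __name__ == "__main__":
    # Accuracy rises from n=1 to small groups, then falls back toward
    # p_shared as the correlated cue dominates.
    for n in (1, 5, 21, 201):
        print(n, round(group_accuracy(n), 3))
```

A group of five beats a group of two hundred in this setup because, when the shared cue happens to be wrong, a handful of private signals can still outvote it, while in a large crowd the half following the shared cue drags the majority with it.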
If you could travel back in time and propose the idea of democracy to the inhabitants of 13th century England, they’d respond with incredulity: how could peasants in far-flung corners of the kingdom find out who was running for office? Who would count the votes? How many months would it take to tally up the results, determine who won, and get the news back to the outlying provinces? If you have a printing press, news–and speeches–can quickly and accurately spread across large distances and to large numbers of people, but prior to the press, large-scale democracy simply wasn’t practical.
Likewise, the communism of 1917 probably couldn’t have been enacted in 1776, simply because society at that time didn’t have the technology yet to gather all of the necessary data on crop production, factory output, etc. (As it was, neither did Russia of 1917, but they were closer.)
Today, the amount of information we can gather and share on a daily basis is astounding. I have at my fingertips the world’s greatest collection of human knowledge, an overwhelming torrent of data.
All of these information networks have linked society together into an increasingly efficient meta-brain–unfortunately, it’s not a very smart meta-brain. Like the participants in Couzin’s experiments, we are limited to what “everyone knows,” stymied in our efforts to impart more specialized knowledge. (I don’t know about you, but I find being shouted down by a legion of angry people who know less about a subject than I do one of the particularly annoying features of the internet.)
For example, there’s been a lot of debate lately about immigration, but how much do any of us really know about immigrants or immigrant communities? How much of this debate is informed by actual knowledge of the people involved, and how much is just people trying to extend vague moral principles to cover novel situations? I recently had a conversation with a progressive acquaintance who justified mass-immigration on the grounds that she has friendly conversations with the cabbies in her city. Heavens protect us–I hope to get along with people as friends and neighbors, not just when I am paying them!
One gets the impression in conversation with Progressives that they regard Christian Conservatives as a real threat, because that group can throw its weight around in elections or generally enforce cultural norms that liberals don’t like, while remaining completely oblivious to the immigrants’ beliefs. Most of our immigrants hail from countries that are rather more conservative than the US and definitely more conservative than our liberals.
Any sufficiently intelligent democracy ought to be able to think critically about the political opinions of the new voters it is awarding citizenship to, but we struggle with this. My Progressive acquaintance seems to think that we can import an immense, conservative, third-world underclass and it will stay servile indefinitely, not vote its own interests or have any effects on social norms. (Or its interests will be, coincidentally, hers.)
This is largely an information problem–most Americans are familiar with our particular brand of Christian conservatives, but are unfamiliar with Mexican or Islamic ones.
How many Americans have intimate, detailed knowledge of any Islamic society? Very few of us who are not Muslim ourselves speak Arabic, and few Muslim countries are major tourist destinations. Aside from the immigrants themselves, soldiers, oil company employees, and a handful of others have spent time in Islamic countries, but that’s about it–and no one is making any particular effort to listen to their opinions. (It’s a bit sobering to realize that I know more about Islamic culture than 90% of Americans and I still don’t really know anything.)
So instead of making immigration policy based on actual knowledge of the groups involved, people try to extend the moral rules–heuristics–they already have. So people who believe that “religious tolerance is good,” because this rule has generally been useful in preventing conflict between American religious groups, think this rule should include Muslim immigrants. People who believe, “I like being around Christians,” also want to apply their rule. (And some people believe, “Groups are more oppressive when they’re the majority, so I want to re-structure society so we don’t have a majority,” and use that rule to welcome new immigrants.)
And we are really bad at testing whether or not our rules are continuing to be useful in these new situations.
Ironically, as our networks have become more effective, our ability to incorporate new information may have actually gone down.
The difficulties large groups experience trying to coordinate and share information force them to become dominated by procedures–set rules of behavior and operation are necessary for large groups to operate. A group of three people can use ad-hoc consensus and rock-paper-scissors to make decisions; a nation of 320 million requires a complex body of laws and regulations.
The other day on Twitter, Nick B. Steves challenged me to find data supporting or refuting his assertion that Nerds vs. Jocks is a false stereotype, invented around 1975. Of course, we HBDers have a saying–“all stereotypes are true,” even the ones about us–but let’s investigate Nick’s claim and see where it leads us.
(NOTE: If you have relevant data, I’d love to see it.)
Unfortunately, terms like “nerd,” “jock,” and “chad” are not all that well defined. Certainly if we define “jock” as “athletic but not smart” and nerd as “smart but not athletic,” then these are clearly separate categories. But what if there’s a much bigger group of people who are smart and athletic?
Or what if we are defining “nerd” and “jock” too narrowly? Wikipedia defines nerd as, “a person seen as overly intellectual, obsessive, or lacking social skills.” I recall a study–which I cannot find right now–which found that nerds had, overall, lower-than-average IQs, but that study included people who were obsessive about things like comic books, not just people who majored in STEM. Similarly, should we define “jock” only as people who are good at sports, or do passionate sports fans count?
For the sake of this post, I will define “nerd” as “people with high math/science abilities” and “jock” as “people with high athletic abilities,” leaving the matter of social skills undefined. (People who merely like video games or watch sports, therefore, do not count.)
Nick is correct on one count: according to Wikipedia, although the word “nerd” has been around since 1951, it was popularized during the 70s by the sitcom Happy Days. However, Wikipedia also notes that:
An alternate spelling, as nurd or gnurd, also began to appear in the mid-1960s or early 1970s. Author Philip K. Dick claimed to have coined the nurd spelling in 1973, but its first recorded use appeared in a 1965 student publication at Rensselaer Polytechnic Institute. Oral tradition there holds that the word is derived from knurd (drunk spelled backward), which was used to describe people who studied rather than partied. The term gnurd (spelled with the “g”) was in use at the Massachusetts Institute of Technology by 1965. The term nurd was also in use at the Massachusetts Institute of Technology as early as 1971 but was used in the context for the proper name of a fictional character in a satirical “news” article.
suggesting that the word was already common among nerds themselves before it was picked up by TV.
Terman’s goal was to disprove the then-current belief that gifted children were sickly, socially inept, and not well-rounded.
This belief was especially popular in a little nation known as Germany, where it inspired both long hikes in the woods to keep schoolchildren fit and the mass extermination of Jews, who were believed to be muddying the German gene pool with their weak, sickly, high-IQ genes (and nefariously trying to marry strong, healthy Germans in order to replenish their own defective stock.) It didn’t help that German Jews were both high-IQ and beset by a number of illnesses (probably related to high rates of consanguinity,) but then again, the Gypsies are beset by even more debilitating illnesses, yet no one blames this on all of the fresh air and exercise afforded by their highly mobile lifestyles.
(Just to be thorough, though, the Nazis also exterminated the Gypsies and Hans Asperger’s subjects, despite Asperger’s insistence that they were very clever children who could probably be of great use to the German war effort via code breaking and the like.)
The results of Terman’s study are strongly in Nick’s favor. According to Psychology Today’s account:
His final group of “Termites” averaged a whopping IQ of 151. Following-up his group 35-years later, his gifted group at mid-life definitely seemed to conform to his expectations. They were taller, healthier, physically better developed, and socially adept (dispelling the myth at the time of high-IQ awkward nerds).
…the first volume of the study reported data on the children’s family, educational progress, special abilities, interests, play, and personality. He also examined the children’s racial and ethnic heritage. Terman was a proponent of eugenics, although not as radical as many of his contemporary social Darwinists, and believed that intelligence testing could be used as a positive tool to shape society.
Based on data collected in 1921–22, Terman concluded that gifted children suffered no more health problems than normal for their age, save a little more myopia than average. He also found that the children were usually social, were well-adjusted, did better in school, and were even taller than average. A follow-up performed in 1923–1924 found that the children had maintained their high IQs and were still above average overall as a group.
Of course, we can go back even further than Terman–in the early 1800s, allergies like hay fever were associated with the nobility, who of course did not do much vigorous work in the fields.
My impression, based on studies I’ve seen previously, is that athleticism and IQ are positively correlated. That is, smarter people tend to be more athletic, and more athletic people tend to be smarter. There’s a very obvious reason for this: our brains are part of our bodies, people with healthier bodies therefore also have healthier brains, and healthier brains tend to work better.
At the very bottom of the IQ distribution, mentally retarded people tend to also be clumsy, flaccid, or lacking good muscle tone. The same genes (or environmental conditions) that make children have terrible health/developmental problems often also affect their brain growth, and conditions that affect their brains also affect their bodies. As we progress from low to average to above-average IQ, we encounter increasingly healthy people.
In most smart people, high-IQ doesn’t seem to be a random fluke, a genetic error, nor fitness reducing: in a genetic study of children with exceptionally high IQs, researchers failed to find many genes that specifically endowed the children with genius, but found instead a fortuitous absence of deleterious genes that knock a few points off the rest of us. The same genes that have a negative effect on the nerves and proteins in your brain probably also have a deleterious effect on the nerves and proteins throughout the rest of your body.
Controlling for age, physical maturity, and mother’s education, a significant curvilinear relationship between intelligence and coital status was demonstrated; adolescents at the upper and lower ends of the intelligence distribution were less likely to have sex. Higher intelligence was also associated with postponement of the initiation of the full range of partnered sexual activities. … Higher intelligence operates as a protective factor against early sexual activity during adolescence, and lower intelligence, to a point, is a risk factor.
Here we see the issue plainly: males at 120 and 130 IQ are less likely to get laid than clinically retarded men in the 70s and 60s. The right side of the graph is the “nerds”; the left side, the “jocks.” Of course, the high-IQ females are even less likely to get laid than the high-IQ males, but males tend to judge themselves against other men, not women, when it comes to dating success. Since the low-IQ females are much less likely to get laid than the low-IQ males, this implies that most of these “popular” guys are dating girls who are smarter than themselves–a fact not lost on the nerds, who would also like to date those girls.
In 2001, the MIT/Wellesley magazine Counterpoint (Wellesley is MIT’s “sister school” and the two campuses allow cross-enrollment in each other’s courses) published a sex survey that provides a more detailed picture of nerd virginity:
I’m guessing that computer scientists invented polyamory, and neuroscientists are the chads of STEM. The results are otherwise pretty predictable.
Unfortunately, Counterpoint appears to be defunct due to lack of funding/interest and I can no longer find the original survey, but here is Jason Malloy’s summary from Gene Expression:
By the age of 19, 80% of US males and 75% of women have lost their virginity, and 87% of college students have had sex. But this number appears to be much lower at elite (i.e. more intelligent) colleges. According to the article, only 56% of Princeton undergraduates have had intercourse. At Harvard 59% of the undergraduates are non-virgins, and at MIT, only a slight majority, 51%, have had intercourse. Further, only 65% of MIT graduate students have had sex.
The student surveys at MIT and Wellesley also compared virginity by academic major. The chart for Wellesley displayed below shows that 0% of studio art majors were virgins, but 72% of biology majors were virgins, and 83% of biochem and math majors were virgins! Similarly, at MIT 20% of ‘humanities’ majors were virgins, but 73% of biology majors. (Apparently those most likely to read Darwin are also the least Darwinian!)
How Rolling Stone-ish are the few lucky souls who are doing the horizontal mambo? Well, not very. Considering all the non-virgins on campus, 41% of Wellesley and 32% of MIT students have only had one partner (figure 5). It seems that many Wellesley and MIT students are comfortingly monogamous. Only 9% of those who have gotten it on at MIT have been with more than 10 people and the number is 7% at Wellesley.
Someone needs to find the original study and PUT IT BACK ON THE INTERNET.
But this lack of early sexual success seems to translate into long-term marital happiness, once nerds find “the one.” Lex Fridman’s Divorce Rates by Profession offers a thorough list. The average divorce rate was 16.35%, with a high of 43% (Dancers) and a low of 0% (“Media and communication equipment workers.”)
I’m not sure exactly what all of these jobs are nor exactly which ones should count as STEM (veterinarian? anthropologists?) nor do I know how many people are employed in each field, but I count 49 STEM professions that have lower than average divorce rates (including computer scientists, economists, mathematical science, statisticians, engineers, biologists, chemists, aerospace engineers, astronomers and physicists, physicians, and nuclear engineers,) and only 23 with higher than average divorce rates (including electricians, water treatment plant operators, radio and telecommunication installers, broadcast engineers, and similar professions.) The purer sciences obviously had lower rates than the more practical applied tech fields.
The big outliers were mathematicians (19.15%), psychologists (19.26%), and sociologists (23.53%), though I’m not sure they count (if so, there were only 22 professions with higher than average divorce rates.)
I’m not sure which professions count as “jock” or “chad,” but athletes had lower than average rates of divorce (14.05%), as did firefighters, soldiers, and farmers. Financial examiners, hunters, and dancers (presumably an athletic female occupation), however, had very high rates of divorce.
According to the survey recently taken by the “infidelity dating website,” Victoria Milan, individuals working in the finance field, such as brokers, bankers, and analysts, are more likely to cheat than those in any other profession. However, following those in finance comes those in the aviation field, healthcare, business, and sports.
With the exception of healthcare and maybe aviation, these are pretty typical Chad occupations, not STEM.
The Mirror has a similar list of jobs where people are most and least likely to be married. Most likely: Dentist, Chief Executive, Sales Engineer, Physician, Podiatrist, Optometrist, Farm product buyer, Precision grinder, Religious worker, Tool and die maker.
Least likely: Paper-hanger, Drilling machine operator, Knitter textile operator, Forge operator, Mail handler, Science technician, Practical nurse, Social welfare clerk, Winding machine operative, Postal clerk.
I struggled to find data on male fertility by profession/education/IQ, but there’s plenty on female fertility, e.g. the deceptively titled High-Fliers have more Babies:
…American women without any form of high-school diploma have a fertility rate of 2.24 children. Among women with a high-school diploma the fertility rate falls to 2.09 and for women with some form of college education it drops to 1.78.
However, among women with college degrees, the economists found the fertility rate rises to 1.88 and among women with advanced degrees to 1.96. In 1980 women who had studied for 16 years or more had a fertility rate of just 1.2.
As the economists prosaically explain: “The relationship between fertility and women’s education in the US has recently become U-shaped.”
Here is another article about the difference in fertility rates between high and low-IQ women.
But female fertility and male fertility may not be the same–I recall data elsewhere indicating that high-IQ men have more children than low-IQ men, which, given high-IQ women’s lower fertility, implies those men are having their children with low-IQ women. (For example, while Bill and Hillary seem about matched on IQ and have only one child, Melania Trump does not seem as intelligent as Trump, who has five children.)
Of the 1,508,874 children born in 1920 in the birth registration area of the United States, occupations of fathers are stated for … 96.9%… The average number of children ever born to the present wives of these occupied fathers is 3.3 and the average number of children living 2.9.
The average number of children ever born ranges from 4.6 for foremen, overseers, and inspectors engaged in the extraction of minerals to 1.8 for soldiers, sailors, and marines. Both of these extreme averages are easily explained, for soldiers, sailors, and marines are usually young, while such foremen, overseers, and inspectors are usually in middle life. For many occupations, however, the ages of the fathers are presumably about the same, and the differences shown indicate real differences in the size of families. For example, the low figures for dentists (2), architects (2.1), and artists, sculptors, and teachers of art (2.2) are in striking contrast with the figures for mine operatives (4.3), quarry operatives (4.1), bootblacks, and brick and stone masons (each 3.9). …
As a rule the occupations credited with the highest number of children born are also credited with the highest number of children living, the highest number of children living appearing for foremen, overseers, and inspectors engaged in the extraction of minerals (3.9) and for steam and street railroad foremen and overseers (3.8), while if we exclude groups plainly affected by the age of fathers, the highest number of children living appears for mine and quarry operatives (each 3.6).
Obviously the job market was very different in 1920–no one was majoring in computer science. Perhaps some of those folks who became mine and quarry operatives back then would become engineers today–or perhaps not. Here are the average numbers of surviving children for the most obviously STEM professions (remember average for 1920 was 2.9):
The Journal-Constitution studied 54 public universities, “including the members of the six major Bowl Championship Series conferences and other schools whose teams finished the 2007-08 season ranked among the football or men’s basketball top 25.”…
Football players average 220 points lower on the SAT than their classmates. Men’s basketball was 227 points lower.
University of Florida won the prize for biggest gap between football players and the student body, with players scoring 346 points lower than their peers.
Georgia Tech had the nation’s best average SAT score for football players, 1028 of a possible 1600, and best average high school GPA, 3.39 of a possible 4.0. But because its student body is apparently very smart, Tech’s football players still scored 315 SAT points lower than their classmates.
UCLA, which has won more NCAA championships in all sports than any other school, had the biggest gap between the average SAT scores of athletes in all sports and its overall student body, at 247 points.
From the original article, which no longer seems to be up on the Journal-Constitution website:
All 53 schools for which football SAT scores were available had at least an 88-point gap between team members’ average score and the average for the student body. …
Football players performed 115 points worse on the SAT than male athletes in other sports.
The differences between athletes’ and non-athletes’ SAT scores were less than half as big for women (73 points) as for men (170).
Many schools routinely used a special admissions process to admit athletes who did not meet the normal entrance requirements. … At Georgia, for instance, 73.5 percent of athletes were special admits compared with 6.6 percent of the student body as a whole.
On the other hand, as Discover Magazine discusses in “The Brain: Why Athletes are Geniuses,” athletic tasks–like catching a fly ball or slapping a hockey puck–require exceptionally fast and accurate brain signals to trigger the correct muscle movements.
Ryan Stegal studied the GPAs of high school student athletes vs. non-athletes and found that the athletes had higher average GPAs than the non-athletes, but he also notes that the athletes were required to meet certain minimum GPA requirements in order to play.
But within athletics, it looks like the smarter athletes perform better than dumber ones, which is why the NFL uses the Wonderlic Intelligence Test:
NFL draft picks have taken the Wonderlic test for years because team owners need to know if their million dollar player has the cognitive skills to be a star on the field.
What does the NFL know about hiring that most companies don’t? They know that regardless of the position, proof of intelligence plays a profound role in the success of every individual on the team. It’s not enough to have physical ability. The coaches understand that players have to be smart and think quickly to succeed on the field, and the closer they are to the ball the smarter they need to be. That’s why every potential draft pick takes the Wonderlic Personnel Test at the combine to prove he does–or doesn’t–have the brains to win the game. …
The first use of the WPT in the NFL was by Tom Landry of the Dallas Cowboys in the early 70s, who took a scientific approach to finding players. He believed players who could use their minds where it counted had a strategic advantage over the other teams. He was right, and the test has been used at the combine ever since.
For the NFL, years of testing shows that the higher a player scores on the Wonderlic, the more likely he is to be in the starting lineup—for any position. “There is no other reasonable explanation for the difference in test scores between starting players and those that sit on the bench,” Callans says. “Intelligence plays a role in how well they play the game.”
A large study conducted at the Sahlgrenska Academy and Sahlgrenska University Hospital in Gothenburg, Sweden, reveals that young adults who regularly exercise have higher IQ scores and are more likely to go on to university.
The study was published in the Proceedings of the National Academy of Sciences (PNAS), and involved more than 1.2 million Swedish men. The men were performing military service and were born between the years 1950 and 1976. Both their physical and IQ test scores were reviewed by the research team. …
The researchers also looked at data for twins and determined that primarily environmental factors are responsible for the association between IQ and fitness, and not genetic makeup. “We have also shown that those youngsters who improve their physical fitness between the ages of 15 and 18 increase their cognitive performance.”…
I have seen similar studies before, some involving mice and some, IIRC, the elderly. It appears that exercise is probably good for you.
I have a few more studies I’d like to mention quickly before moving on to discussion.
Overall, it looks like smarter people are more athletic, more athletic people are smarter, smarter athletes are better athletes, and exercise may make you smarter. For most people, the nerd/jock dichotomy is wrong.
However, there is very little overlap at the very highest end of the athletic and intelligence curves–most college (and thus professional) athletes are less intelligent than the average college student, and most college students are less athletic than the average college (and professional) athlete.
Additionally, while people with STEM degrees make excellent spouses (except for mathematicians, apparently), their reproductive success is below average: they have sex later than their peers and, as far as the data I’ve been able to find shows, have fewer children.
Even if there is a large overlap between smart people and athletes, they are still separate categories selecting for different things: a cripple can still be a genius, but can’t play football; a dumb person can play sports, but not do well at math. Stephen Hawking can barely move, but he’s still one of the smartest people in the world. So the set of all smart people will always include more “stereotypical nerds” than the set of all athletes, and the set of all athletes will always include more “stereotypical jocks” than the set of all smart people.
In my experience, nerds aren’t socially awkward (aside from their shyness around women). The myth that they are stems from the fact that they have different interests and communicate in a different way than non-nerds do. Let nerds talk to other nerds, and they are perfectly normal, communicative, socially functional people. Put them in a room full of non-nerds, and suddenly the nerds are “awkward.”
Unfortunately, the vast majority of people are not nerds, so many nerds have to spend the majority of their time in the company of lots of people who are very different than themselves. By contrast, very few people of normal IQ and interests ever have to spend time surrounded by the very small population of nerds. If you did put them in a room full of nerds, however, you’d find that suddenly they don’t fit in. The perception that nerds are socially awkward is therefore just normie bias.
Why did the nerd/jock dichotomy become so popular in the 70s? Probably in part because science and technology were really taking off as fields normal people could aspire to major in: man had just landed on the moon, and the Intel 4004 was released in 1971. Very few people went to college or were employed in the sciences back in 1920; by 1970, colleges were everywhere and science was booming.
And at the same time, colleges and high schools were ramping up their athletics programs. I’d wager that the average school in the 1800s had neither PE nor athletics of any sort. To find those, you’d probably have to attend private academies like Andover or Exeter. By the 70s, though, schools were taking their athletics programs–even athletic recruitment–seriously.
How strong you felt the dichotomy probably depends on the nature of your school. I have attended schools where all of the students were fairly smart and there was no anti-nerd sentiment, and I have attended schools where my classmates were fiercely anti-nerd and made sure I knew it.
But the dichotomy predates the terminology. Take Superman, who debuted in 1938. His disguise is a pair of glasses, because no one can believe that the bookish, mild-mannered Clark Kent is actually the super-strong Superman. Batman is based on the character of Zorro, created in 1919. Zorro is an effete, weak, foppish nobleman by day and a dashing, sword-fighting hero of the poor by night. Of course these characters are both smart and athletic, but their disguises only work because others do not expect them to be. As fantasies, the characters are powerful because they provide a vehicle for our own desires: for our everyday normal failings to be just a cover for how secretly amazing we are.
But for the most part, most smart people are perfectly fit, healthy, and coordinated–even the ones who like math.