“3 in 4 former prisoners in 30 states arrested within 5 years of release” (from the Bureau of Justice Statistics press release, April 22, 2014.)

Inspired by my recent musings, I thought I would refresh my memory on recidivism stats–I have a vague memory that murderers tend not to recidivate (murderers tend to stay in prison for a very long time) and that carjackers do, but it’s a bad idea to make claims based on vague memories of old data.
So here’s what the press release has to say:
“An estimated two-thirds (68 percent) of 405,000 prisoners released in 30 states in 2005 were arrested for a new crime within three years of release from prison, and three-quarters (77 percent) were arrested within five years…
More than a third (37 percent) of prisoners who were arrested within five years of release were arrested within the first six months after release, with more than half (57 percent) arrested by the end of the first year.”
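(A quick note on reading those numbers: the six-month and one-year figures are percentages of the people eventually re-arrested, not of everyone released. A back-of-envelope conversion (my arithmetic, not the BJS’s) looks like this:)

```python
# Back-of-envelope: convert the press release's conditional percentages
# into unconditional shares of all 405,000 releasees. My arithmetic,
# not figures from the BJS release itself.
released = 405_000
p_arrested_5yr = 0.77        # re-arrested within five years
p_6mo_given_arrest = 0.37    # of those, within six months of release
p_1yr_given_arrest = 0.57    # of those, within one year of release

arrested = released * p_arrested_5yr
print(f"re-arrested within 5 years: {arrested:,.0f}")
print(f"within 6 months: {arrested * p_6mo_given_arrest:,.0f} "
      f"({p_arrested_5yr * p_6mo_given_arrest:.0%} of all releasees)")
print(f"within 1 year:   {arrested * p_1yr_given_arrest:,.0f} "
      f"({p_arrested_5yr * p_1yr_given_arrest:.0%} of all releasees)")
```

So something like 28% of everyone released is re-arrested within six months, and about 44% within the first year.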
We could probably save some time and effort if we could effectively identify that third (the 37 percent re-arrested within six months) before releasing them. However, I don’t know what percent of these people are being re-arrested on parole violations that the rest of us might not really consider “crimes,” like missing a meeting with one’s parole officer or forgetting to register one’s address.
“Recidivism rates varied with the attributes of the inmate. Prisoners released after serving time for a property offense were the most likely to recidivate. Within five years of release, 82 percent of property offenders were arrested for a new crime, compared to 77 percent of drug offenders, 74 percent of public order offenders and 71 percent of violent offenders.”
I’m guessing violent offenders spent longer in prison, and thus were older when released.
“Recidivism was highest among males, blacks and young adults. By the end of the fifth year after release, more than three-quarters (78 percent) of males and two-thirds (68 percent) of females were arrested, a 10 percentage point difference that remained relatively stable during the entire 5-year follow-up period.
Five years after release from prison, black offenders had the highest recidivism rate (81 percent), compared to Hispanic (75 percent) and white (73 percent) offenders.”
So, while the chances of being a criminal vary widely between groups, criminals from all groups recidivate at fairly similar rates. This suggests that we are probably actually arresting the subset of people who are criminals most of the time.
“Within five years of release, 61 percent of released inmates with four or fewer arrests in their prior criminal history were arrested, compared to 86 percent of those who had 10 or more prior arrests.”
Maybe guys with 10 prior arrests shouldn’t be released until they’re well over 40?
Some finer grain on recidivism by specific crime, after five years (note: this does not tell us the new offense,) from the PDF:
Looks like my vague memories were correct. Murderers are the least likely to recidivate, probably due to the personal nature of many murders (you’ve got to really hate that guy,) and murderers being older when released, but they are still folks who aren’t great at solving inter-personal problems or running their lives. Rapists probably figure out non-illegal ways to have sex, or else get old enough to be less interested in it. Drunks probably learn to call a cab when drunk.
Relatively speaking, of course. A 50 or 60% recidivism rate still isn’t something that inspires great confidence. To be clear, again, this is not data on how many released murderers commit another murder or how many released rapists commit another rape–this is arrest for any crime. A further breakdown of re-arrest by new crime vs. old crime would be interesting.

Carjacking, by contrast, looks like the Xtreme sport of crime–people attracted to this form of violent thrill-seeking seem unlikely to change their spots or find more legal alternatives.
From the abstract: The role of parenting in the development of criminal behavior has been the source of a vast amount of research, with the majority of studies detecting statistically significant associations between dimensions of parenting and measures of criminal involvement. An emerging group of scholars, however, has drawn attention to the methodological limitations–mainly genetic confounding–of the parental socialization literature. The current study addressed this limitation by analyzing a sample of adoptees to assess the association between 8 parenting measures and 4 criminal justice outcome measures. The results revealed very little evidence of parental socialization effects on criminal behavior before controlling for genetic confounding and no evidence of parental socialization effects on criminal involvement after controlling for genetic confounding.
In other words, looks like my basic thesis is holding up. Overall, I suspect it is far easier to fuck up a kid so they don’t meet their full potential (say, by abusing/neglecting) than to get rid of the effects of negative traits. It’s probably best to try to work with people’s inclinations by finding them life-paths that work for them, rather than trying to mold them into something they aren’t.
“Divorce is not a new thing; people have been getting divorces in this part of the world for centuries. The truth is that marriage was not necessarily about love, but wait, this is not a bad thing: marriage was a contract in which both the husband and the wife would receive mutual benefits. In addition, women married families, not just the man. If the wife was not gaining her benefits, why should she stay in the marriage? Some of us are the grand- or great-granddaughters of women who divorced several times. It was not a taboo and was not treated as something shameful. Apparently no woman getting married believed that it would last a lifetime. Women left their husbands under various pretexts and returned to their parents’ home, leaving children with the husband’s family; they would frequently return to continue playing a role in their children’s lives. Women could have several husbands in their lifetime, not unlike men who married multiple women.”
“I have noted that the most popular women in Yoruba history who are still remembered today are thought to have never married or had children (starting with Efunsetan). When women divorced, sometimes they would leave their children with their husbands’ families, so blended families always existed too. And there were several reasons people did not marry, sometimes not by choice; for example, certain priests/priestesses never married because they were already married to the deities they worshipped.”
“I can’t speak authoritatively for every society, but in parts of Yorubaland this love of kids was not limited to one’s biological children. It’s interesting that people would say Africans in the past loved kids, but would limit this to biological children. Have we all not heard of the “it takes a whole village to raise a child” thing? Marriage was never for procreation because children were seen as communal. I have learned that adoption was not uncommon among some Yoruba of the past (and in fact among other ethnic groups; remember, King Ahebi’s most beloved son was adopted). Usually temporary, unlike the Western adoption model today; it was normal for children to live away from their parents. My own parents did not grow up with their parents but with relatives. It was common back in the day to send children to a place where they could learn a trade and work as an apprentice. Basically everyone took care of children.”
“I think a lot of us tend to be ashamed of polygamy when referring to the past, but look at it this way: the polygamy of the past existed because people needed to make a living. Again, marriage was mutually beneficial. In places where land was usually owned by men, wives would work the land, farm, and sell their produce in order to make money for themselves.”
From Slavery Site: “Nigeria is Africa’s most populous country, with a population of 149,229,090. It is bordered on the coast by Benin to the west and Cameroon to the east. Lagos was originally settled by the Yorubas, and is now the largest city in Nigeria (8-10 million population) and one of the largest in Africa, second to Cairo in urban area population. Located on the Slave Coast, it was a major center of the slave trade from 1704 to 1851.”
From Protecting Children from Abuse and Neglect: Trends and Issues (discussing the CA foster care system):
“Foster Care Population by Race/Ethnicity. As shown in Figure 10, African–American and Native American children make up a disproportionately high amount of the foster care population relative to their share of the total state population (for those ages 20 or younger). The rates of African–Americans in foster care are four times that of the rates of African–Americans in the state’s total population, [bold mine] and similar disproportionality exists for Native Americans. Conversely, there are lower rates of Whites, Latinos, and Asian/Pacific Islanders in the foster care population as there are in the state’s total population. Most notably, Asian/Pacific Islanders make up approximately 11 percent of the state population but only 2.5 percent of the foster care population.” [Me: Even though a lot of these folks were Vietnamese refugees who’ve had it pretty damn hard in life.]
“Foster Care Outcomes by Race/Ethnicity. There are also differences in foster care outcomes when comparing one race/ethnicity to another, some of which are displayed in Figure 11. As shown in the figure, African–American and Native American children are significantly more likely to be the subject of a substantiated maltreatment report and enter foster care as compared to White, Latino, or Asian/Pacific Islander children. … African–American and Native American children are also less likely to reunify with their families than White, Latino, and Asian/Pacific Islander children. Further, African–American children have less stability in their foster care placements on average than children of all other races/ethnicities.”
ChildStats.gov states, “Seventy-four percent of White, non-Hispanic, 59 percent of Hispanic, and 33 percent of Black children lived with two married parents in 2012.” That leaves 67% of black kids living with one parent, no parents, or unmarried parents; if about 55% live with a single parent, that’s roughly 67-55 = 12% of black kids living with neither parent. A large percentage of those kids live with grandparents, aunts, or other relatives, but a lot are in foster care.
Conservatives like to claim that if black people would just form two-parent families and raise their kids together, black poverty, incarceration, drug use, low SAT scores, etc., would all disappear.
While a bit of stability might help, (or might not, since males commit the vast majority of violence, so you might just trade neglect for physical abuse,) conservatives are probably on the wrong general track.
The quotes at the top of the post, about the Yoruba, are the sort of thing you might read in your anthropology class and come away with the idea that before evil white people showed up, the rest of the world was full of wonderful gender egalitarians who had lovely, enlightened views about childrearing. Even the title of presidential hopeful Hillary Clinton’s book, “It Takes a Village,” is supposed to come from an African proverb on child rearing. There’s some controversy over whether or not it is an actual proverb, or just loosely based on one of the many very similar African proverbs, eg, “A child does not grow up only in a single home,” and, “A child belongs not to one parent or home.” (from the Wikipedia page on the book.)
Various conservatives have responded, “No, it takes a family to raise a child,” just showing that no one involved understands tribal family structure, because a “village” in tribal society is an extended family.
But a village isn’t an extended family in the US, which makes the notion of trying to transfer the model to a population where outbreeding has been the norm for over a thousand years, tribalism is almost non-existent, and most people don’t live anywhere near their extended kin (and they are less closely related to their extended kin than people in a tribal society who’ve been marrying their first and second cousins for generations,) sound rather fraught with difficulties.
The rest of the post is meant to caution against seeing the world through rose-colored glasses. Here we have descendants of that same population (plus others) with very similar marriage and child-rearing norms, but the general reaction is completely opposite. What is a sign of the wonderfulness of tribal Africans is considered a sign of degeneracy and/or dysfunction here at home. (It is, of course, a total mystery how the same group of people could come up with the same childrearing and marriage norms while living in totally different times, places, and dominant cultures. /sarcasm)
Here in the US, we can see for ourselves rates of child abuse, malnutrition and neglect (and we think of this as a problem.) Until someone invents a time machine, we’ll have a much more difficult time getting a first-hand view of the pre-colonial Yoruba. (Heck, the vast majority of us don’t even have a first-hand view of the current Yoruba.) I’m sure some colonialists wrote accounts of what they saw when they arrived in the area–but any colonialist account that paints pre-colonialized people in a negative light is generally assumed to be biased and tainted by racism, which makes such accounts not-so-useful for supporting arguments in polite discussion. We’d need some kind of data, and data is often hard to come by.
Here are my own suspicions, though:
The tribal/village structure of these west African communities probably provided enough kin support that families could move children around like this and still have many of them reach adulthood. The system may, in fact, have been superior to just having the kids home with mom. Similar to modern day care, the extended kin network could look after the kids while mom worked in the fields or traveled to other cities to trade or do other work.
This system has low incentives for marital fidelity or monogamy, leading to an excess of unmarried males, which helped fuel the slave trade in the first place, though that is beyond the scope of this post.
However, rates of child abuse/neglect/abandonment/starvation/malnutrition were probably pretty high, just as they are in various communities today. These sorts of unpleasant details just don’t tend to show up in accounts that are trying to cast their subjects in a positive light, and frankly, horrible rates of infant mortality were so common in the past as to be unremarkable to many observers.
Here in the US, the system is less functional because, for starters, there are few African American men with large farms for their wives to raise crops on. People who would have been at the top of the social pile in Yorubaland–people who had all of the traits necessary to be a successful, thriving, happy member of Yoruba society–may not have the traits necessary to out-compete, say, Taiwanese immigrants with their nose-to-the-grindstone approach to getting their kids into medical school. Living in modern America is also much more expensive than living in a tribal village–the cost of housing, transportation (car), health care, etc., in the US will set you back many a small third-world farm. Not to mention different policing standards.
Per capita GDP in modern Nigeria is $3,005.51. This is after tremendous growth; in 2000, it was only $377.50–I’m guessing oil is mostly responsible for the difference, because I recall hearing about a joint venture between the Russian gas company Gazprom and the Nigerian National Petroleum Corporation, so I’d caution against assuming that a ton of that money went to ordinary citizens. Looking backwards, pre-colonial per capita GDP was probably also pretty darn low, with most people engaged in subsistence agriculture.
Our perceptions of “wealth” are entirely dependent on how the other citizens of a society live–a guy with a fifty-acre farm can be “wealthy” in a third-world agricultural society, while “desperately poor” by first-world standards. How he sees himself probably has a lot to do with how he compares to his neighbors–is he on top of his society, or at the bottom?
I’ve been thinking about criminality, inspired by a friend’s musings on why he didn’t turn to crime during his decades of homelessness and schizophrenia. My answer was relatively simple: I think my friend just isn’t a criminal sort of person.
To clarify what I mean: let’s assume, similar to IQ, that each person has a “criminality quotient,” or CQ. Like IQ, one’s relative CQ is assumed to basically hold steady over time–that is, we assume that a person who rates “Low CQ” at 20 will also rate “Low CQ” at 10, 30, and 60 years old, though the particular activities people do obviously change with age. Absolute CQ decreases for everyone past 35 or so.
A low CQ person has very little inclination to criminal behavior–they come to a full stop at stop signs, return excess change if a cashier gives them too much, don’t litter, and always cooperate in the Prisoner’s Dilemma. They are a bit dull, but they make good neighbors and employees.
A mildly CQ person is okay with a few forms of petty crime, like shoplifting, underage drinking, pot smoking, or yelling at people. They make fun friends, but they litter and their party guests vomit in your bushes, making them bad neighbors. You generally wouldn’t arrest these people, even though they do break the law.
A moderately CQ person purposefully does things that actually hurt people. They mug people or hold up conbinis (convenience stores); they get in fights. They mistreat animals, women, and children. They defect in the Prisoner’s Dilemma. They make shitty friends and shitty neighbors, because they steal your stuff.
A high CQ person is a murderer; they have no respect for human life.
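(For the programmatically inclined, here is a toy sketch of the model: CQ as a normally distributed latent trait whose rank order holds steady while its absolute level declines past 35. Every parameter is made up for illustration; this is just the assumptions above rendered in code.)

```python
import random

# Toy model of "CQ": a rank-stable latent trait, like IQ, whose
# absolute level declines for everyone past ~35. All parameters are
# illustrative assumptions, not estimates from any dataset.

def cq_at_age(baseline, age, decline_per_year=0.03):
    """Absolute CQ: flat until 35, then declining for everyone."""
    return baseline - decline_per_year * max(0, age - 35)

def label(baseline):
    """Relative standing, which the model assumes never changes."""
    if baseline < -1: return "low CQ"         # full stops at stop signs
    if baseline < 0:  return "mildly CQ"      # litters, shoplifts
    if baseline < 1:  return "moderately CQ"  # muggings, fights
    return "high CQ"                          # no respect for human life

random.seed(0)
for baseline in (random.gauss(0, 1) for _ in range(5)):
    # Absolute CQ falls with age, but everyone falls together, so the
    # rank order (and hence the label) stays fixed.
    print([round(cq_at_age(baseline, age), 2) for age in (20, 40, 60)],
          label(baseline))
```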
A common explanation for criminality is that poverty causes it, hence my formerly homeless friend’s confusion. Obviously poverty can cause people to commit crimes they wouldn’t otherwise, like stealing food or sleeping in public parks. But in general, I suspect the causal arrow points the other way: criminality involves certain traits–like aggression and impulsivity–that make it hard to keep jobs, which makes criminal people poor.
Good people, reduced to poverty, remain good people. Bad people, suddenly given a bunch of money, remain bad people.
I’m not sure how one would test the first half of this without massive confounders or terrible ethics, but the latter half seems relatively easy, if you can just find enough petty criminals who’ve won the lottery and aren’t in prison. (Although now that I think about it, you could test the first half by looking at before-and-during data for people affected by essentially government-induced famines or poverty events.) Just a friendly wager, but I bet Jews during the Holocaust had crime rates lower than American inner-city lottery winners.
But “criminality” is a complex trait, so let’s unpack that a little. What exactly is it about criminality that makes it correlate with poverty?
Subtraits: aggression, impulsivity, low intelligence, lack of empathy, low risk aversion, high temporal discount.
Any of these traits by themselves wouldn’t necessarily induce criminality–people with Down’s Syndrome, for example, have low IQs but are very kind and have no inclination toward criminality (that I have ever heard of, anyway.) Many autistic people are supposed to be low in empathy, but do not desire to hurt others, and often have rather strong moral compasses. Low-risk-aversion people can just do xtreme sports, and high-time-preference people can be bad at saving money but otherwise harmless. Even aggressive people can channel their aggression into something useful if they are intelligent. Impulsive people might just eat too many cookies or dye their hair wacky colors.
But people who have more than one of these traits are highly likely to engage in criminal behavior.
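(A toy way to see why: suppose each subtrait alone adds only a small bump to the odds of criminal behavior, but co-occurring traits compound. Then carriers of any single trait stay mostly harmless, while the probability climbs steeply once traits stack. The numbers below are invented purely for illustration.)

```python
# Toy illustration: single subtraits barely move the needle, but each
# PAIR of co-occurring subtraits adds a large interaction term.
# All constants are made up for illustration.
BASE = 0.02          # baseline probability of criminal behavior
PER_TRAIT = 0.03     # small bump per isolated subtrait
PER_PAIR = 0.15      # extra risk per pair of co-occurring subtraits

def p_criminal(n_traits):
    pairs = n_traits * (n_traits - 1) // 2
    return min(1.0, BASE + PER_TRAIT * n_traits + PER_PAIR * pairs)

for n in range(5):
    print(f"{n} subtraits -> p(criminal behavior) = {p_criminal(n):.2f}")
```

One trait alone gets you to 0.05; two get you to 0.23; three to 0.56.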
However, these traits do not appear to be randomly distributed (thus, criminality is not randomly distributed.) Rather, they seem to belong to a complex or archetype, of which “criminality” is one manifestation.
This complex has probably been more or less the human default for most of human history. After all, chimps are not especially known for not tearing each other’s faces off. And saving up wealth for tomorrow instead of eating it today doesn’t make sense if the tribe next door can just come in and steal it. In a violent, chaotic, pre-state tribal world, “criminality” is survival.
To summarize briefly: Peter Frost proposes that the precipitous drop in W. European crime levels over the past thousand years or so has been due to states executing criminals, thus removing “criminal” genes from the genepool. The sticky questions are whether the drop in crime actually happened when and where his theory suggests, and whether enough people were actually killed to make a dent in criminality.
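(How big a dent is a quantitative question, and the breeder’s equation gives a back-of-envelope answer. Here is a toy simulation; the execution rate and heritability are my own rough assumptions, not Frost’s figures. Removing the top 1% of a heritable “criminality liability” each generation, for forty generations, shifts the mean by about half a standard deviation, which is not nothing.)

```python
from statistics import NormalDist

# Toy truncation-selection model of the execution argument: each
# generation, the top `frac` of a normally distributed criminality
# liability leaves no offspring. Response per generation follows the
# breeder's equation R = h^2 * S. The 1% execution rate and h^2 = 0.5
# are rough assumptions for illustration, not Frost's estimates.
nd = NormalDist()

def selection_differential(frac):
    """Drop in the survivors' mean when the top `frac` is removed."""
    z = nd.inv_cdf(1 - frac)       # truncation point in SD units
    return nd.pdf(z) / (1 - frac)  # standard truncation-selection result

h2 = 0.5           # assumed heritability of the liability
frac = 0.01        # assume ~1% of each generation executed
generations = 40   # ~1,000 years at 25 years per generation

mean = 0.0
for _ in range(generations):
    mean -= h2 * selection_differential(frac)
print(f"mean liability after {generations} generations: {mean:.2f} SD")
```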
I suspect that Frost is at least partially right–many people who might have had children were executed instead–but there is another factor to consider:
A land where criminals are executed is a land where criminals are already useless or less than useless. They have gone from assets to nuisances (horrible ones, but nuisances nonetheless), to be swatted like flies.
In a land where criminals are useful, we do not call them criminals; we call them heroes. Is Che Guevara a murderer or a freedom fighter? Depends on who you ask. Is the man who crushes his enemies, drives off their cattle, and hears the lamentations of their women a hero or a butcher? In Mongolia, there are statues of Genghis Khan, and he is regarded as the father of Mongolia. Vlad Tepes is a hero in Romania.
In a land where marauding tribes are no longer a concern, you have no need for violent tribesmen of your own. In a land where long-term saving is technically possible, people who do save can get ahead. In these places, the criminality complex is no longer favored, and even mildly CQ people–too mild to get executed–get out-competed by people with lower CQs.
However, I do caution that recent data suggests this trend may have reversed, and criminals may now have more children than non-criminals. I wouldn’t count on anything being eternal.
Looking back over my own thoughts on the subject over the years, I think this is essentially a reversal of sorts. Our legal system is built on the Enlightenment (I think) idea of redeemability–that criminals can be changed; that we punish the individual criminal act, not the “criminality” of the offender. This may not be so with the death penalty or with certain egregiously heinous acts like child rape, but in general, there are principles like “no double jeopardy” and “people who have served their time should be allowed to re-integrate into society and not be punished anymore.” The idea of CQ basically implies that some people should be imprisoned irrespective of whatever crimes they’ve been convicted of, simply because they’re going to commit more crimes.
There’s a conflict here, and it’s easy to see how either view, taken to extremes, could go horribly wrong. Thus it is probably best to maintain a moderate approach to imprisonment, while trying to ensure that society is set up to encourage lawful behavior and not reward criminality.
Your thoughts and reflections are encouraged/appreciated.
I think there’s a book by the title of “The Art Instinct.” I haven’t read it. If anyone knows of any good sources re: human genetics, art, and history, I’d be grateful.
As far as I know, some kind of art exists in all human populations–even Neanderthals and other non-anatomically-modern hominins like Homo erectus, I think, appear to have had occasional instances of some form of art. (I am skeptical of claims that dolphins, elephants, and chimps have any real ability to do art, as they do not, to my knowledge, produce art on their own in their natural habitats; you can also teach a gorilla to speak in sign language, but it would be disingenuous to suggest that this is something gorillas naturally do.)
However, artistic production is clearly not evenly distributed throughout the planet. Even when we only consider societies that had good access to other societies’ inventions and climates that didn’t destroy the majority of art within a few years of creation, there’s still a big difference in output. Europe and China are an obvious comparison; both regions have created a ton of beautiful art over the years, and we are lucky enough that much of it has been preserved. But near as I can tell, Europeans have produced more. (People in the Americas, Australia, etc., did not have historical access to Eurasian trade routes and so had no access to the pigments and paints Europeans were using, but people in the Middle East and China did.)
Europeans did not start out with a lot of talent; Medieval art is pretty shitty. European art was dominated by pictures of Jesus and Mary to an extent that whole centuries of it are boring as fuck. Even so, they produced a lot of it–far more than the arguably more advanced cultures of the Middle East, where drawing people was frowned upon, and so painting and sculpture had a difficult time getting a foothold.
I speculate that during this thousand years or so of shitty art, the Catholic Church and other buyers of religious paintings effectively created a market that wouldn’t have existed otherwise (especially via their extensive taxation scheme, which meant all of Europe was paying for the Pope to have more paintings). The (apparently insatiable) demand for religious paintings meant employment for a lot of artists, which in turn meant the propagation of whatever genes make people good at art (as well as whatever cultural traits do the same.) After 700 to a thousand years or so, we finally see the development of art that is actually good–art that suggests some extraordinary talent on the part of the artist.
I further speculate that Chinese art has been through a similar but slightly less extensive process, due to less historical demand, due to the historical absence of an enormous organization with lots of money interested in buying lots of art. Modern life may provide very different incentives, of course.
Thus the long period of tons of boring art may have been a necessary precursor to the development of actually good art.
The habit of treating culture like some totally independent variable in considering human outcomes is the sort of thinking that makes me want to bang my head on the keyboard. Every time someone says, “[Person] isn’t really [negative trait], they just come from a [negative trait] background that made them act [negative trait],” I want to yell, “Where the hell do you think that background came from? The magical culture fairy?” You get [negative trait] cultures because they are full of people who have those traits. (And they might even think those are positive traits, btw.)
Hilariously, people from highly organized cultures seem compelled to create organizations wherever they go. The converse, unfortunately, is also true.
Impulsive people die younger than non-impulsive people, so much so that how your teacher rated you as a student back when you were a kid is actually a decent predictor of your later mortality.
The first two probable reasons for this are obvious:
1. They do risky things like drive too fast, hold up conbinis, or take drugs, all of which can actually kill you.
2. They engage in behaviors with potentially negative long-term consequences, like eating too many donuts or failing out of school and having to do a crappy job with bad safety precautions.
But the third reason is less obvious, unless you’re Jayman:
3. There is no point to planning for the future if you’re going to die young anyway.
Some people come from long-lived people. They have genes that will help keep them alive for a very long time. Some people don’t. These people live fast. They walk earlier, they mature earlier, they get pregnant earlier, and they die earlier. Everything they do is on a shorter timeframe than the long-lived people. To them, they aren’t impulsive–everyone else is just agonizingly slow.
Why save for retirement if you’re not going to live that long?
Impulsive people are like normal people, just sped up.
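(To put reason 3 in arithmetic: discount the future by nothing except your odds of being alive to collect. A toy comparison, with survival rates I made up for illustration:)

```python
# Toy discounting: the value of a payoff T years away, discounted only
# by the chance of surviving to collect it. Survival rates are
# illustrative assumptions, not actuarial data.
def present_value(payoff, years, annual_survival):
    return payoff * annual_survival ** years

for survival in (0.995, 0.97):  # slow vs. fast life history, assumed
    pv = present_value(100, 40, survival)
    print(f"annual survival {survival}: $100 in 40 years is worth ${pv:.2f} now")
```

At 99.5% annual survival, the deferred $100 is still worth about $82; at 97%, it is worth about $30. Under the second hazard rate, spending now isn’t impulsive; it’s correct arithmetic.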
I’ve been trying for a while to figure out when atheism became mainstream in the West. Sometimes I answer, “Around the end of the English Civil War,” (1650) and sometimes I answer, “Late 1980s/early 1990s.”
Medieval Europeans seem to have been pretty solidly Christian–probably about as Christian as modern Muslims are Muslim.
Modern Westerners are highly atheistic–even many of the “Christians”. So what happened?
I speculate that the upper classes in France, Britain, and the Colonies (and probably that-which-would-become-Germany and a few other places I’m less familiar with, like the Netherlands,) were largely atheistic by the 1700s. Look at the writings of the Enlightenment philosophers, the behavior of the French nobility, the English distrust of any kind of religious “enthusiasm,” German bishops actively encouraging Jewish settlement in their cities and attempting to protect the Jews from angry peasant mobs, various laws outlawing or greatly limiting religious power passed during the French Revolution, the deism of the Founding Fathers, etc.
By contrast, the lower classes in NW Europe and especially America retained their belief for far longer–a few isolated pockets of belief surviving even into the present. For example, see the Pilgrims, the Counter-Revolution in the Vendee, maybe German peasants, televangelists in the 80s, blue laws, and Appalachian snake handlers in the ’50s, etc.
So how did that happen? I propose that the upper class and lower class followed different evolutionary trajectories (due to different conditions), with strong religiosity basically already selected out by the 1700s, meaning the relevant selection period is roughly 500-1700, not post-1700s.
During this time, the dominant religion was Catholicism, and Catholicism has generally forbidden its priests, monks, nuns, etc., from getting married and having children since somewhere around the 300s or 400s. (With varying levels of success.)
Who got to be an official member of the Church hierarchy? Members of the upper class. Peasants couldn’t afford to do jobs that didn’t involve growing food, and upper class people weren’t going to accept peasants as religious authorities with power over their eternal souls, anyway. Many (perhaps most) of the people who joined the church were compelled at least in part by economic necessity–lack of arable land and strict inheritance laws meant that a family’s younger sons and daughters would not have the resources for marriage and family formation, anyway, and so these excess children were shunted off to monasteries.
There was another option for younger sons: the army. (Not such a good option for younger daughters.) Folks in the army probably did have children; you can imagine the details.
So we can imagine that, given the option between the army and the Church, those among the upper class with more devout inclinations probably chose the Church. And given a few hundred years of your most devout people leaving no children (and little genetic inflow from the lower classes,) the net result would be a general decrease in the % of genes in your population that contribute to a highly religious outlook.
(This assumes, of course, that religiosity can be selected for. I assume it can.)
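(If you want numbers: the same breeder’s-equation arithmetic as the execution sketch earlier gives a rough upper bound. Upper bound, because clerical recruitment was surely not a clean truncation on devotion, and the particular figures here, 5% of each upper-class generation celibate and heritability 0.3, are my assumptions, not anyone’s estimates.)

$$\Delta\bar{z} \;\approx\; G \, h^2 \, \frac{\varphi(z_p)}{1-p} \;=\; 48 \times 0.3 \times \frac{\varphi(1.645)}{0.95} \;\approx\; 1.6\ \text{SD},$$

where p = 0.05 is the celibate fraction, z_p its standard-normal cutoff, φ the normal density, and G ≈ 48 generations between 500 and 1700. Even discounted heavily, that is selection you would notice.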
Since the lower classes could not join the Church, we should see much more religiosity among them. (Other factors affected the lower classes, just not this one.) If anything, one might speculate that religiosity may have increased reproductive success for the lower classes, where it could have inspired family-friendly values like honesty, hard work, fidelity, not being a drunkard, etc. A hard-working, moderately devout young man or woman may have been seen as a better potential spouse by the folks arranging marriages than a non-devout person.
Religiosity probably persisted in the US for longer than in Europe because:
1. More religious people tended to move from Europe to America, leaving Europe less religious and America more;
2. The beneficial effects of being a devout person who could raise lots of children were enhanced by the availability of abundant natural resources, allowing these people to raise even more children. NW Europe has had very little new land opened up in the past thousand years, limiting everybody’s expansion. The European lower classes historically did not reproduce themselves (horrific levels of disease and malnutrition will do that to you), being gradually replaced by downwardly-mobile upper classes. (There are probably regions in which the lower classes did survive, of course.)
3. By the time we’re talking about America, we’re talking about Protestant denominations rather than Catholicism, and Protestants generally allow their clergy to marry.
1. Evolution is real. Incentives are real. Math is real. Their laws are as iron-clad as gravity’s and enforced with the furor of the Old Testament god. Disobey, and you will be eliminated.
2. Whatever you incentivize, you will get. Whatever you don’t incentivize, you will not get. Create systems that people can cheat, and you create cheaters. If criminals have more children than non-criminals, then the future will be full of criminals. Create systems that reward trust and competence, and you will end up with a high trust, competent system.
3. Society is created by people, through the constant interaction of the basic traits of the people in it and the incentives of its systems.
4. Morality is basically an evolved mental/social toolkit to compel you to act in your genetic self-interest. Morality does not always function properly in evolutionarily novel situations, can be hijacked, and does not function similarly or properly in everyone, but people are generally capable of using morality to good ends when dealing with people in their trust networks.
Therefore:
5. Whatever you think is wrong with the world, articulate it clearly, attempt to falsify your beliefs, and then look for practical, real-world solutions. This is called science, and it is one of our greatest tools.
6. Create high-trust networks with trustworthy people. A high trust system is one where you can be nice to people without fear of them defecting. (Call your grandma. Help a friend going through a rough time. Don’t gossip.) High trust is one of the key ingredients necessary for everything you consider nice in this world.
7. Do not do/allow/tolerate things/people that destroy trust networks. Do not trust the untrustworthy nor act untrustworthy to the trusting.
8. Reward competency. Society is completely dependent on competent people doing boring work, like making sure water purification plants work and food gets to the grocery store.
9. Rewarding other traits in place of competency destroys competency.
10. If you think competent people are being unjustly excluded, find better ways to determine competency–don’t just try to reward people from the excluded pools, as there is no guarantee that this will lead to hiring competent people. If you select leaders for some other trait (say, religiosity,) you’ll end up with incompetent leaders.
11. Act in reality. The internet is great for research, but kinda sucks for hugs. Donating $5 to competent charities will do more good than anything you can hashtag on Twitter. When you need a friend, nothing beats someone who will come over to your house and have a cup of tea.
12. Respond to life with Aristotelian moderation: If a lightbulb breaks, don’t ignore it and don’t weep over it. Just change the lightbulb. If someone wrongs you, don’t tell yourself you deserved it and don’t escalate into a screaming demon. Just defend yourself and be ready to listen to the other person if they have an explanation.
(Because watching other people say that thing you were saying and be like ‘omg I was saying that’ and then they give it their own twist and you are like ‘oh yes I see where this is going and it gets back to the morality model’ and then the joy at how much fun it is.)
(Guys guys we are talking about memes, okay. And the big question brought up by the part I quoted is, of course, What are the long-term effects of changing transmission pathways?)
Quote:
“How Transmission Pathways Matter
In my outline, I mentioned that the transmission pathway – vertical or horizontal – matters a great deal for the content and friendliness of transmitted cultural items.
In biology, there is already support for this model. Parasitic entities like bacteria that are limited to vertical transmission – transmission from parent to child only – quickly evolve into benign symbiosis with the host, because their own fitness is dependent on the fitness of the host entity. But parasitic entities that may accomplish horizontal transmission are not so constrained, and may be much more virulent, extracting high fitness costs from the host. (See, e.g., An empirical study of the evolution of virulence under both horizontal and vertical transmission, by Stewart, Logsdon, and Kelley, 2005, for experimental evidence involving corn and a corn pathogen.)
As indicated in an earlier section, ancient cultural data is very tree-like, indicating that the role of horizontal transmission has been minimal. However, the memetic technologies of modernity – from book printing to the internet – increased the role of horizontal transmission. I have previously written that the modern limited fertility pattern was likely transmitted horizontally, through Western-style education and status competition by limiting fertility (in The history of fertility transitions and the new memeplex, Sarah Perry, 2014). The transmission of this new “memeplex” was only sustainable by horizontal transmission; while it increases the individual well-being of “infected carriers,” it certainly decreases their evolutionary fitness. …”
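(The vertical/horizontal logic is easy to see in a toy simulation. Below, parasite strains vary in virulence: under vertical-only transmission a strain spreads solely through host births, which virulence reduces, while horizontal transmission lets virulence pay for itself. This is my own illustrative sketch, not Stewart, Logsdon, and Kelley’s actual model.)

```python
import random

# Toy host-parasite model. Each infected host carries a strain with
# virulence v in [0, 1]. Vertical spread scales with host births,
# which virulence cuts; horizontal spread scales with virulence
# (think: sicker hosts shed more). Parameters are illustrative only.
random.seed(1)

def mean_virulence(horizontal, generations=300, pop=500, mut=0.02):
    strains = [random.random() for _ in range(pop)]
    for _ in range(generations):
        # Strain fitness: births (1 - v), plus contacts (2v) if
        # horizontal transmission is possible.
        weights = [(1 - v) + (2 * v if horizontal else 0) for v in strains]
        strains = random.choices(strains, weights=weights, k=pop)
        # Small mutations, clamped to [0, 1].
        strains = [min(1.0, max(0.0, v + random.gauss(0, mut))) for v in strains]
    return sum(strains) / pop

print(f"vertical-only transmission:   mean virulence ~ {mean_virulence(False):.2f}")
print(f"with horizontal transmission: mean virulence ~ {mean_virulence(True):.2f}")
```

Vertical-only drives virulence toward zero (the strain’s interests align with the host’s); allowing horizontal spread drives it toward the maximum.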
Okay, right. So your meme-mitochondria will most likely protect you from dying, but don’t much give a shit if you end up killing people who are not-you or at least don’t share your genes. And meme-viruses will try to get you to not kill society at large (which is busy propagating them,) but don’t particularly care if they kill you.
Reflections:
1. Will modern mass-media destroy itself by accidentally destroying the people that use it? Can mass-media be a stable, long-term part of the human cultural/technological toolkit?
2. Does modern mass-media create an actually different moral meme-environment from the vast majority of the human past? Is this good/bad/neutral?
3. Will we evolve to be adapted to this meme-environment, say, by people who believe that Western Education is Sin kidnapping girls, selling them as brides, and then massively out-breeding people who “Lean In”?
The short version of this is if you could measure the relative gender dimorphism of people–say, by comparing siblings–and compare that to their IQ, I wager the more androgynous people would come out smarter.
This began with Jayman’s Pioneer Hypothesis, which basically posits that frontier or pioneer environments will select for a certain suite of traits, like aggressiveness and early menarche–that is, traits that allow people to quickly take over and fill up the land.
Based on this initial theory, I hypothesized that East Germany was settled later than West Germany–which turns out to be actually true. I was pretty stoked about that.
Anyway, earlier menarche => lower IQ (I’m pretty sure this is well-documented), as shortening childhood means shortening the period of time your brain has to develop.
Raising the age of menarche gives your brain more time to develop. Environments where family formation has historically been difficult–ie, very densely populated areas with little free land available, where people might have to wait for their relatives to die before they can get their own farm–have likely evolved people who hit menarche later (after all, there’s no need for early menarche in such an environment; see also cave fish losing pigment because it’s not useful). The opposite side of this coin is later menopause, but since these folks have lower fertility overall, I don’t think that’s a big factor.
Anyway, later menarche => more time for brains to develop => higher IQ.
I suppose the speculative part here is that late menarche populations are more androgynous and early menarche populations are less androgynous. This probably wouldn’t hold for all populations, but anecdotal experience with Americans seems consistent–eg, MIT students seem highly androgynous, while dumb people from elsewhere seem much more dimorphic. Actually, many of the extremely high-IQ people I’ve known have been trans*, as opposed to none of the dumb ones. Among dumb people, it seems perfectly normal for women to socialize in all-female groups, especially for activities like shopping and discussing celebrity gossip, while men find it normal to socialize in all-male groups, especially for activities like watching other grown men play keep-away (sports), drinking beer, and playing poker. (To be fair, though, I don’t have a lot of first-hand experience in the world of the dumb.)
Historically pioneer and historically densely settled populations probably end up with different notions of what is “normal” dimorphism, leading to lots of disputes, as each side claims that their experiences are normal without realizing that the other side’s experiences are normal for them, too.