The Neurology of Cross-Cultural Authority? pt 2

As we were discussing yesterday, I theorize that people have neural feedback loops that reward them for conforming/imitating others/obeying authorities and punish them for disobeying/not conforming.

This leads people to obey authorities or go along with groups even when they know, logically, that they shouldn’t.

There are certainly many situations in which we want people to conform even though they don’t want to, like when my kids have to go to bed or buckle their seatbelts–as I said yesterday, the feedback loop exists because it is useful.

But there are plenty of situations where we don’t want people to conform, like when trying to brainstorm new ideas.

Under what conditions will people disobey authority?

As we previously discussed, using technology to create anonymous, a-reputational conversations may allow us to avoid some of the factors that lead to groupthink.

But in person, people may disobey authorities when they have some other social system to fall back on. If disobeying an authority in Society A means I lose social status in Society A, I will be more likely to disobey if I am a member in good standing in Society B.

If I can use my disobedience against Authority A as social leverage to increase my standing in Society B, then I am all the more likely to disobey. A person who can effectively stand up to an authority figure without getting punished must be, our brains reason, a powerful person, an authority in their own right.

Teenagers do this all the time, using their defiance against adults, school, teachers, and society in general to win higher social status among other teenagers, the people they actually care about impressing.

SJWs do this, too:



I normally consider the president of Princeton an authority figure, and even though I probably disagree with him on far more political matters than these students do, I’d be highly unlikely to be rude to him in real life–especially if I were a student he could get expelled from college.

But if I had an outside audience–Society B–clapping and cheering for me behind the scenes, the urge to obey would be weaker. And if yelling at the President of Princeton could guarantee me high social status, approval, job offers, etc., then there’s a good chance I’d do it.

But then I got to thinking: Are there any circumstances under which these students would have accepted the president’s authority?

Obviously if the man had a proven track record of competently performing a particular skill the students wished to learn, they might follow his example.

Or not.

If authority works via neural feedback loops, employing some form of “mirror neurons,” do these systems activate more strongly when the people we are perceiving look more like ourselves (or our internalized notion of what people in our “tribe” look like, since mirrors are a recent invention)?

In other words, what would a cross-racial version of the Milgram experiment look like?

Unfortunately, it doesn’t look like anyone has tried it (and to do it properly, it’d need to be a big experiment, involving several “scientists” of different races [so that the study isn’t biased by one “scientist” just being bad at projecting authority] interacting with dozens of students of different races, which would be a rather large undertaking.) I’m also not finding any studies on cross-racial authority (I did find plenty of websites offering practical advice about different groups’ leadership styles,) though I’m sure someone has studied it.
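
To give a sense of why such an experiment would be a large undertaking, here is a minimal sketch of the arithmetic behind a fully crossed design; the race labels, number of actors, and participants per cell are purely hypothetical placeholders, not figures from any actual study.

```python
# Hypothetical sketch of a balanced cross-racial "Milgram-style" design.
# Every experimenter race is crossed with every participant race, and each
# combination uses several different "scientists" so the results aren't
# skewed by one actor being bad at projecting authority. All labels and
# counts below are illustrative assumptions only.
from itertools import product

experimenter_races = ["A", "B", "C"]   # races of the authority figures
participant_races = ["A", "B", "C"]    # races of the "teachers"
scientists_per_race = 3                # distinct actors per experimenter race
participants_per_cell = 20             # participants per design cell

cells = list(product(experimenter_races, participant_races))
total_scientists = len(experimenter_races) * scientists_per_race
total_participants = len(cells) * participants_per_cell

print(f"{len(cells)} design cells (experimenter race x participant race)")
print(f"{total_scientists} scientists and {total_participants} participants needed")
```

Even with these modest placeholder numbers, the fully crossed design already calls for nine cells and well over a hundred participants, which is why nobody seems to have run it.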

However, I did find cross-racial experiments on empathy, which may involve the same brain systems, and so are suggestive:

From Racial Bias Reduces Empathic Sensorimotor Resonance with Other-Race Pain, by Avenanti et al.:

Using transcranial magnetic stimulation, we explored sensorimotor empathic brain responses in black and white individuals who exhibited implicit but not explicit ingroup preference and race-specific autonomic reactivity. We found that observing the pain of ingroup models inhibited the onlookers’ corticospinal system as if they were feeling the pain. Both black and white individuals exhibited empathic reactivity also when viewing the pain of stranger, very unfamiliar, violet-hand models. By contrast, no vicarious mapping of the pain of individuals culturally marked as outgroup members on the basis of their skin color was found. Importantly, group-specific lack of empathic reactivity was higher in the onlookers who exhibited stronger implicit racial bias.

From Taking one’s time in feeling other-race pain: an event-related potential investigation on the time-course of cross-racial empathy, by Sessa et al.:

Using the event-related potential (ERP) approach, we tracked the time-course of white participants’ empathic reactions to white (own-race) and black (other-race) faces displayed in a painful condition (i.e. with a needle penetrating the skin) and in a nonpainful condition (i.e. with Q-tip touching the skin). In a 280–340 ms time-window, neural responses to the pain of own-race individuals under needle penetration conditions were amplified relative to neural responses to the pain of other-race individuals displayed under analogous conditions.

In Seeing is believing: neural mechanisms of action-perception are biased by team membership, Molenberghs et al. write:

In this study, we used functional magnetic resonance imaging (fMRI) to investigate how people perceive the actions of in-group and out-group members, and how their biased view in favor of own team members manifests itself in the brain. We divided participants into two teams and had them judge the relative speeds of hand actions performed by an in-group and an out-group member in a competitive situation. Participants judged hand actions performed by in-group members as being faster than those of out-group members, even when the two actions were performed at physically identical speeds. In an additional fMRI experiment, we showed that, contrary to common belief, such skewed impressions arise from a subtle bias in perception and associated brain activity rather than decision-making processes, and that this bias develops rapidly and involuntarily as a consequence of group affiliation. Our findings suggest that the neural mechanisms that underlie human perception are shaped by social context.

None of these studies shows definitively whether in-group vs. out-group biases are an inherent feature of our neurological systems, but Avenanti’s finding that people were more empathetic toward a purple-skinned person than toward a member of a racial out-group suggests that some amount of learning is involved in the process–and that rather than comparing people against our in-group, we may be comparing them against our out-group.

At any rate, you may get similar outcomes either way.

In cases where you want to promote group cohesion and obedience, it may be beneficial to sort people by self-identity.

In cases where you want to guard against groupthink, obedience, or conformity, it may be beneficial to mix up the groups. Intellectual diversity is great, but even ethnic diversity may help people resist defaulting to obedience, especially when they know they shouldn’t.

A study by McKinsey and Company suggests that mixed-race companies outperform more homogenous companies:

(Exhibit from McKinsey’s Diversity Matters report.)

but I can find other studies that suggest the opposite, e.g., Women Don’t Mean Business? Gender Penalty in Board Appointments, by Isabelle Solal:

Using data from two panel studies on U.S. firms and an online experiment, we examine investor reactions to increases in board diversity. Contrary to conventional wisdom, we find that appointing female directors has no impact on objective measures of performance, such as ROA, but does result in a systematic decrease in market value.

(Solal argues that investors may perceive the hiring of women–even competent ones–as a sign that the company is pursuing social justice goals instead of money-making goals and dump the stock.)

Additionally, diverse companies may find it difficult to work together toward a common goal–there is a fair amount of evidence that increasing diversity decreases trust and inhibits group cohesion. E.g., from The downside of diversity:

IT HAS BECOME increasingly popular to speak of racial and ethnic diversity as a civic strength. From multicultural festivals to pronouncements from political leaders, the message is the same: our differences make us stronger.

But a massive new study, based on detailed interviews of nearly 30,000 people across America, has concluded just the opposite. Harvard political scientist Robert Putnam — famous for “Bowling Alone,” his 2000 book on declining civic engagement — has found that the greater the diversity in a community, the fewer people vote and the less they volunteer, the less they give to charity and work on community projects. In the most diverse communities, neighbors trust one another about half as much as they do in the most homogenous settings. The study, the largest ever on civic engagement in America, found that virtually all measures of civic health are lower in more diverse settings.

As usual, I suspect there is an optimum level of diversity–depending on a group’s purpose and its members’ preferences–that helps minimize groupthink while still preserving most of the benefits of cohesion.

The architecture of communication

I was thinking today about the Bay of Pigs fiasco–a classic example of “groupthink” in action. A bunch of guys supposedly hired for their ability to think through complex international scenarios somehow managed to use the biggest military in the world to plan an invasion that was overwhelmingly crushed within 3 days.

In retrospect, the Bay of Pigs Invasion looks like an idiotic idea; everyone should have realized this was going to be a colossal disaster. But ahead of time, everyone involved thought it was a great idea that would totally succeed, including a bunch of people who got killed during the invasion.

Groupthink happens when everyone starts advocating for the same bad ideas, either because they actually think they’re good ideas, or because individuals are afraid to speak up and say anything counter to the perceived group consensus.

There’s not much you can do about dumb, but consensus we can fight.

The obvious first strategy is to just try to get people to think differently.

Some people are naturally agreeable, inclined to go along with others and seek group harmony. These folks make great employees, but should not be the entirety of a decision-making body because they are terrible at rejecting bad ideas.

By contrast, some people are assholes who like to be difficult. While you might not want to look specifically for “assholes,” the inclusion of at least one person with a reputation for saying unpopular things in any decision-making group will ensure that after a disaster, there will be at least one person there to say, “I told you so.”

You may also be able to strategically use people with different backgrounds/training/experiences/etc. Different schools of thought often come up with vastly different explanations for how the world works, and so can bring different ideas to the table. By contrast, a group that is all communists or all libertarians is likely to miss something that is obvious to folks from other groups; a group that’s all Ivy League grads or all farmers is likewise prone to miss something.

Be careful when mixing up your group; getting useless people into the mix just because they have some superficial trait that makes them seem different from others will not help. You want someone who actually brings a different and valuable perspective or ideas to the group, not someone who satisfies someone else’s political agenda about employment.

Alternatively, if you can’t change the group’s membership, you may be able to break down consensus by isolating people so that they can’t figure out what the others think. Structure the group so that members can’t talk to each other in real life, or if that’s too inefficient, make everyone submit written reports on what they’re thinking before the talking begins. Having one member or part of the group that is geographically isolated–say, having some of your guys located in NY and some in Chicago–could also work.
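
As a rough illustration of the “written reports first” idea, here is a minimal sketch; the member names, positions, and the choice to shuffle before revealing are my own assumptions, not a prescribed procedure.

```python
# Sketch: gather each member's written position independently, then reveal
# them all at once in shuffled order with no names attached, so nobody can
# anchor on the boss's opinion or an emerging consensus before committing.
import random

def reveal_anonymously(positions: dict) -> None:
    """Print the submitted positions in random order, stripped of names."""
    texts = list(positions.values())
    random.shuffle(texts)
    for text in texts:
        print(f"- {text}")

if __name__ == "__main__":
    # In practice each member would write this separately, before any
    # meeting is held. These entries are made-up placeholders.
    submitted = {
        "Member 1": "The landing site has no realistic escape route.",
        "Member 2": "The air support assumptions look optimistic.",
        "Member 3": "I support the plan as drafted.",
    }
    reveal_anonymously(submitted)
```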

But probably the easiest way to remove the urge to go along with bad ideas for the sake of group harmony is to remove the negative social repercussions for being an asshole.

Luckily for us, the technology for this already exists: anonymous internet messageboards. When people communicate anonymously, they are much more likely to say unpopular, assholish things like “your invasion plan is fucking delusional,” than they are in public, where they have to worry about being un-invited to Washington cocktail parties.

Anonymity is important. In fact, for truly good decision-making, you may have to go beyond mere anonymity to truly a-reputational posting, with no recognizable names or handles.

The internet helpfully supplies us with communities with different degrees of anonymity. On Facebook, everyone uses their real names; Twitter has a mix of real names and anonymous handles where people still build up reputations; Reddit and blogs are pretty much all pseudonyms; 8Chan is totally anonymous with no names and no ability to build up a reputation.

Facebook is full of pretty messages about how you should Be Yourself! and support the latest feel-good political cause. Since approximately everyone on FB has friended their boss, grandmother, dentist, and everyone else they’ve ever met, (and even if you haven’t, employers often look up prospective applicants’ FB profiles to see what they’ve been up to,) you can only post things on FB that won’t get you in trouble with your boss, grandmother, and pretty much everyone else you know.

I have often felt that Facebook is rather like a giant meta-brain, with every person an individual neuron sending messages out to everyone around them, and that so long as all of the neurons are messaging harmony, the brain is happy. But when one person is out of sync, sending out messages that don’t mesh with everyone else’s, it’s like having a rogue brain cell that’s firing at its own frequency, whenever it wants to, with the result that the meta-brain wants to purge the rogue cell before it causes epilepsy.

I know my position in the Facebook meta-brain, and it’s not a good one.

The Facebook architecture leads quickly to either saying nothing that could possibly offend anyone, (thus avoiding all forms of decision-making altogether,) or competitive morality spirals in which everyone tries to prove that they are the best and most moral person in the room. (Which, again, does not necessarily lead to the best decision-making; being the guy who is most anti-communist does not mean you are the guy with the best ideas for how to overthrow Castro, for example.)

Twitter users and bloggers are anonymous enough to say disagreeable things without worrying about it having major effects on their personal lives; they don’t have to worry about grandma bringing up their blog posts at Thanksgiving dinner, for example, unless they’ve told grandma about their blog. However, they may still worry about alienating their regular readers. For example, if you run a regular feminist blog, but happen to think the latest scandal is nonsense and the person involved is making it all up, it’s probably not worth your while to bring it up and alienate your regular readers. If you’re the Wesleyan student paper, it’s a bad idea to run any articles even vaguely critical of #BlackLivesMatter.

Places like 8Chan run completely anonymously, eschewing even reputation. You have no idea if the person you’re talking to is the same guy you were talking to the other day, or even just a few seconds ago. Here, there are basically no negative repercussions for saying you think Ike’s invasion plan is made of horse feces. If you want to know all the reasons why your idea is dumb and you shouldn’t do it, 8Chan is probably the place to ask.

Any serious decision-making organization can set up its own totally anonymous internal messageboard that only members have access to, and then tell everyone that they’re not allowed to take real-life credit for things said on the board. Because once you’ve got anonymous, a-reputational, a-name communication, your next goal is to keep it that way. You have to make it clear–in the rules themselves–to everyone involved that the entire point of the anonymous, a-name messageboard is to prevent people from knowing whose ideas are whose, so that people can say things that disrupt group harmony without fear. Second, you have to make sure that no one tries to bypass the no-reputation rule by just telling everyone who they are. You can do this by having strong rules against it, and/or by frequently claiming to be other posters and stating in the rules that it is perfectly a-ok to pretend to be other posters, precisely for this reason.
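
To make the idea concrete, here is a toy sketch of what the storage layer of such a board might look like–my own illustration, not any existing system. The one design choice that matters is that posts are never stored with an author identifier, so no reputation can accrete.

```python
# Toy sketch of an a-reputational board: a post carries a body and a
# timestamp, and nothing else. There is deliberately no author field, no
# handle, and no way to link two posts to the same person.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Post:
    body: str
    posted_at: datetime

class AnonymousBoard:
    def __init__(self):
        self._posts = []

    def submit(self, body: str) -> None:
        # Whatever identity the caller has (login, session, IP) is discarded
        # at this point; only the text survives.
        self._posts.append(Post(body=body, posted_at=datetime.now(timezone.utc)))

    def read_all(self):
        # Posts come back in chronological order, with no attribution.
        return list(self._posts)

board = AnonymousBoard()
board.submit("Your invasion plan is delusional.")
board.submit("Agreed: the beachhead has no air cover.")
for post in board.read_all():
    print(post.posted_at.isoformat(), post.body)
```

Everything else–members-only access, the no-credit rule, tolerance of impersonation–is policy layered on top of this simple structure rather than something the software has to enforce.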

Obviously moderation is a fast route to groupthink; your small decision-making messageboard really shouldn’t need any moderation beyond the obvious, “don’t do anything illegal.”

The biggest reason you have to make this extra clear from the get-go is that anonymous, a-reputational boards seem to be anathema to certain personality types–just compare the number of women on Facebook with the number of women on Reddit–so any women in your decision-making group may strongly protest the system. (So may many of the men, for that matter.)

Personal experience suggests that men and women use communication for different ends. Men want to convey facts quickly and get everything done; women want to build relationships. Using real names and establishing reputations lets them do that. Being anonymous and atomized, unable to carry on a coherent conversation with another person from minute to minute, makes this virtually impossible. Unfortunately, building relationships gets you right back to where we started, with people neglecting to say uncomfortable things in the interests of maintaining group harmony.

This basic architecture of communication also seems to have an actual effect on the kinds of moral and political memes that get expressed and thrive in each respective environment. As I mentioned above, Facebook slants strongly to the left; when people’s reputations are on the line, they want to look like good people, and they make themselves look like good people by professing deep concern for the welfare of others. As a result, reading Facebook is like jumping onto Cthulhu’s back as he flies through the Overton Window. By contrast, if you’ve ever wandered into /pol/, you know that 8Chan slants far to the right. If you’re going to post about your love of Nazis, you don’t want to do it in front of your boss, your grandma, and all of your friends and acquaintances. When you want to be self-interested, 8Chan is the place to go; without reputation, there’s no incentive to engage in holiness spirals.

I don’t know what such a system would do to corporate or political decision-making of the Bay of Pigs variety, but if you want your organization to look out for its own interests (this is generally accepted as the entire point of an organization,) then it may be useful to avoid pressures that encourage your organization to look out for other people’s interests at the expense of its own.