Algorithmic Optimization pt 2


There is nothing exceptional about the slowed-down Nancy Pelosi video, and nothing terribly exceptional in reporters saying uninformed things about subjects they aren’t well versed in.

The significance lies far more behind the scenes. From MarketWatch: Facebook Decided to Rethink Policies on Doctored Media Two Days Before Pelosi Video.

Wow, that is awfully coincidental that Facebook just happened to be thinking about changing these policies anyway right before a doctored video just happened to make it onto the news, prompting millions of people to pressure Facebook into doing exactly what Facebook already wanted to do.

Don’t be fooled: this isn’t spontaneous. Oh, sure, many of the people at the low end, like reporters, are just doing their job of reading the news they have been given into the camera, but there is plenty of active coordination going on behind the scenes by organizations like Facebook and the Democratic Party.

The Democrats realized sometime around 2016 that they have a meme problem. People on the internet thought Trump was funny and Democrats were boring sticks in the mud. People on the internet made videos about Hillary Clinton’s health, the European migration crisis, and other subjects the Dems didn’t approve of.

They don’t want this happening again.

So they are laying the groundwork now to re-write the policies and algorithms to strategically remove problematic conservative voices from the fray. Alex Jones has already been kicked off YouTube, Facebook, PayPal, etc. FB has taken a particularly hard line, threatening to delete not just Jones’s videos, but any account that posts them (excepting those that post them in order to criticize them).

Even Visa and Mastercard are getting in on the act, cutting off payment services to organizations whose political views they don’t like.

The ostensible reason for Alex Jones’s deplatforming is his supposed spread of conspiracy theories post-Sandy Hook (I say “supposed” because I have not seen the clips in question,) but it is obvious that 1. these concerns surfaced years after Sandy Hook and 2. no one has deplatformed media outlets that pushed the “Iraq has WMDs” conspiracy theory that cost the US trillions of dollars and led to the deaths of thousands (millions?) of people.

This has all been accompanied by a basic shift in how media platforms and infrastructure are viewed.

The traditional conception is that these are platforms, not publishers, and thus they merely provide something akin to infrastructure without much say over how you, the user, put it to use. For example, the electric company provides electricity to anyone who pays for it, and even if you use your electricity to warm the cages of your illegally gotten, exotic, endangered reptile collection, the electric company will generally keep providing you with electricity. The electric company does not have to approve of what you do with the electricity you buy, and if you break the law with their electricity, they see it as the state’s job to stop you.

Publishers and platforms like Facebook have traditionally enjoyed different legal rights and safeguards. A publisher checks and decides to publish every single item it puts out, and so is held responsible for anything it prints. A platform merely provides a space where other people can publish their own works, without supervision. Platforms do not check posts before they go up, (as a practical matter, they can’t,) and thus are generally only held legally responsible for taking down material on their site if someone has notified them that it is in violation of some law.

Eg, suppose someone posts something really illegal, child porn, on Facebook. If Facebook is a “publisher,” it is now publishing child porn and is in big legal trouble. But since Facebook is just a platform, it deletes the material and is legally in the clear. (The poster may still go to prison, of course.)

The conceptual shift in recent years has been to portray platforms as “allowing” people to come in and use their platforms, and then ask why they are allowing such shitty people to use their platform. No one asks why the electric company allows you to use their electricity to raise your army of bio mutant squids, but they do ask why Facebook allows right-wingers to be on the platform at all.

This is treating platforms like publishers, and they are absolutely jumping into it with both feet.

Let’s skip forward a bit in the video to the lady in white to see this in action:

It’s been viewed millions of times on the internet, but it’s not real… This is really scary, and not going away, and I’m fearful this is going to be all over the 2020 election.

You know, that’s how I felt when libs kept bringing up Harry Potter in the context of the last election, but for some reason taking a children’s fantasy story about wizards seriously is acceptable in political discourse while slowed-down videos aren’t.

And who is responsible for monitoring this stuff, taking it down? Facebook, Youtube took it down, but after how long?

Other commentator… At Facebook it’s still up because Facebook allows you to do a mock video…

The correct answer is that no one is responsible for monitoring all of Facebook and Youtube’s content, because that’s impossible to do and because Facebook isn’t your mommy. If you want Facebook to be your mom and monitor everything you consume, just stop talking and leave the adults alone.

CNN asks:

So Monika, in the wake of the 2016 election obviously Facebook has repeatedly told Congress and the American people that you’re serious about fighting disinformation, fake news, and yet this doctored video which I think even your own fact checkers acknowledge is doctored of speaker Pelosi remains on your platform. Why?

Like the previous guy already said, because it’s not against Facebook’s TOS. Of course Anderson Cooper already knows this. He doesn’t need to get an actual Facebook representative on his show to find out that “funny reaction videos are allowed on Facebook.” And if Facebook were serious about maintaining its neutrality as a platform, not a publisher, it would not have bothered to send anyone to CNN–it would have just left matters at a blanket statement that the video does not violate the TOS.

The Facebook Lady (Monika,) then explains how Facebook uses its algorithms to demote and demonetize content the “experts” claim is false. They’re proud of this and want you to know about it.

So misinformation that doesn’t promote violence, but misinformation that portrays the third most powerful politician in the country as a drunk or as somehow impaired, that’s fine? 

Oh no, quick, someone save the third most powerful person in the country from people saying mean things about her on the internet! We can’t have those disgusting peasants being rude to their betters!

Anderson Cooper is infuriatingly moronic; he does not “logically understand” why Facebook leaves up videos that don’t violate the TOS, but suggests that Facebook should “get out of the news business” if it can’t do it well.

Facebook isn’t in the “news business,” you moron, because Facebook is a platform, not a publisher. You’re in the news business, so you really ought to know the difference.

If you don’t know the difference between Facebook and a news organization, maybe you shouldn’t be in the news business.

That said, of course Anderson Cooper actually understands how Facebook works. This whole thing is a charade to give Facebook cover for changing its policies under the excuse of “there was public outrage, so we had to.” It’s an old scam.

So to summarize:

  1. The Dems want to change the algorithms to favor themselves, eg, Facebook decided to rethink policies on doctored media two days before Pelosi video, but don’t want to be so obvious about it that the Republicans fight back
  2. Wait for a convenient excuse, like a slowed-down video, then go into overdrive to convince you that Democracy is Seriously Threatened by Slow Videos, eg, Doctored Videos show Facebook willing enablers of the Russians; Doctored Pelosi video is leading tip in coming disinformation battle
  3. Convince Republican leadership to go along with it because, honestly, they’re morons: Congress investigating deepfakes after doctored Pelosi video, report says
  4. Deplatform their enemies
  5. Rinse and repeat: Vox Adpocalypse


One final note: even though I think there is coordinated activity at the top/behind the scenes at tech companies and the like, I don’t think the average talking head you see on TV is in on it. Conspiracies like that are too hard to pull off; rather, humans naturally cooperate and coordinate their behavior because they want to work together, signal high social status, keep their jobs, etc.

On the rise of mental illness on college campuses


It’s not just at Middlebury. As Sailer notes in his review of Lukianoff and Haidt’s The Coddling of the American Mind:

A remarkable fraction of current articles in The New York Times and The New Yorker include testimony that the author feels emotionally traumatized, which is stereotypically attributed to the malevolence of Donald Trump. But the evidence in The Coddling of the American Mind points to the second Obama administration as being the era when the national nervous breakdown began.

The authors cite alarming evidence of a recent increase in emotional problems. For example, the percentage of college students who said they suffered from a “psychological disorder” increased among males from 2.7 percent in 2012 to 6.1 percent by 2016 (a 126 percent increase). Over the same four years, the percentage of coeds who saw themselves as psychologically afflicted rose from 5.8 percent to 14.5 percent (150 percent growth).
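A quick arithmetic check on the quoted figures (a trivial sketch; the numbers themselves are from the book, via Sailer):

```python
def pct_increase(old, new):
    """Percentage growth from old to new, rounded to the nearest whole percent."""
    return round(100 * (new - old) / old)

print(pct_increase(2.7, 6.1))   # males, 2012 -> 2016: 126
print(pct_increase(5.8, 14.5))  # females, 2012 -> 2016: 150
```

Both quoted growth figures check out.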

Sailer blames the Obama administration, eg, the DOE releasing new definitions of “sexual harassment” that depend more on emotion than reason, but this is only playing kick the can, because why would the Obama DOE want to redefine sexual harassment in the first place? 

So I propose a slightly different origin for the current hysteria: 

If you incentivize lying, you get more lying. If you incentivize social signaling, you get more social signaling. The next thing you know, you get a social signaling spiral.

So people start lying because it gets them status points, but people are kind of bad at lying. Lying is cognitively taxing. The simplest way to make lying less taxing is to believe your own lies.

So the more people get involved in signaling spirals, the more they come to believe their own lies.

Meanwhile, everyone around them is engaged in the same signaling spiral, too. 

People get their view of “Reality” in part by checking it against what everyone else believes. If everyone in your village says the stream is to the east, even if you’ve gotten turned around and feel like it’s to the west, you’ll probably just follow everyone else and hope you get to water. If everyone around you is lying, there’s a good chance you’ll start to believe their lies.
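The spiral described above can be sketched as a toy simulation (purely illustrative–the agents, parameters, and update rule are my own assumptions, not data): each agent shifts its public signal toward the group average while adding a small status-seeking exaggeration on top, and the group average ratchets upward with no coordinator at all.

```python
import random

def simulate(n_agents=100, rounds=50, conformity=0.3, incentive=0.05):
    """Toy signaling spiral.  Each agent's public signal starts near zero
    (honest).  Every round, each agent shifts toward the group average
    (conformity) and then exaggerates slightly for status (incentive).
    Conforming to an average that everyone is also inflating ratchets the
    whole group's signal upward -- no coordination required."""
    random.seed(42)  # deterministic, for illustration only
    signals = [random.uniform(-0.1, 0.1) for _ in range(n_agents)]
    for _ in range(rounds):
        avg = sum(signals) / n_agents
        signals = [s + conformity * (avg - s) + incentive for s in signals]
    return sum(signals) / n_agents

# With the status incentive on, the average signal drifts far from honest;
# turn the incentive off and the group merely converges near zero.
print(simulate())
print(simulate(incentive=0.0))
```

The point of the sketch: no individual decides to go crazy; the drift falls out of conformity plus a tiny per-round incentive to exaggerate.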

(Let’s face it, most people are not that bright. Maybe a little bright. Not a lot. So they go along with society. Society says eat this, don’t eat that–they trust. Society is usually right about things like that, and the societies that aren’t die out.

Trust is key. If you trust that someone has your back, you listen to them. You take advice from them. You might even try to make them proud. If you don’t trust someone, even if they’re right, you won’t listen to them. If you don’t trust them, you assume they want you dead and are trying to trick you. 

Since our system is now full of liars, trust is suffering.)

Eventually there’s just one sane person left in the room, wondering who’s gone insane: them, or everyone else.

In the case of the “mental health breakdown” on the left, it’s a combination of the left lying about its mental health and believing its own lies about things that are bothering it.

But what incentivized lying in the first place?

Sailer dates the emergence of the insanity to 2012-13, but I remember the emergence of the current SJW-orthodoxy and its rabid consumption of what had formerly been known as “liberalism” back in the Bush years, around 2003. I was surprised at the time by the speed with which it went mainstream, spreading from “this thing my friends are arguing about” to “everyone on the internet knows this.”


It’s Facebook. 

Zuckerberg launched “TheFacebook”, featuring photos of Harvard students, in 2004. From there it spread to other prestigious schools, and opened fully to the public in 2006. Because of its real name policy, FB has always incentivized people toward holiness spirals, and it began with an infusion of people who already believed the SJW memeplex that was hot at Harvard in 2004. 

At this point, it’s not necessarily Facebook itself that’s spreading things, and it was never just Facebook. There are plenty of other social media sites, like MySpace, Reddit, and Twitter, that have also spread ideas.

The lethality of disease is partially dependent on how difficult it is to spread. If a disease needs you to walk several miles to carry it to its next host, then it can’t go killing you before you get there. By contrast, if the disease only needs you to explode on the spot, it doesn’t need to keep you alive long enough to get anywhere. Where populations are dense, sanitation is non-existent, and fleas are rampant, you get frequent plague outbreaks because disease has a trivial time jumping from person to person. Where populations are low and spread out, with good sanitation and few vermin, disease has a much harder time spreading and will tend to evolve to coexist with humans for at least as long as it takes to find a new host.

For example, chicken pox has been infecting humans for so long that it is adapted to our ancestral tribal size (which is pretty small,) so it has developed the ability to go dormant for 20 or 40 years until a whole new generation of uninfected people is born. 

AIDS kills people, but because its method of transmission (mostly sex) is not as easy as jumping fleas or contaminated water, it takes a long time. People who’ve caught bubonic plague generally die within a week or so; untreated AIDS patients last an average of 11 years. 
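The trade-off sketched in the last few paragraphs is a standard one in evolutionary epidemiology, and a toy version is easy to write down (the functional forms here–transmissibility growing with the square root of virulence, infectious period shrinking as virulence rises–are illustrative assumptions, not measurements):

```python
import math

def r0(virulence, contact=1.0, recovery=0.2):
    """Toy basic reproduction number: transmissibility rises with virulence
    (with diminishing returns, beta ~ sqrt(v)), while the infectious period
    shrinks as virulence rises (hosts die or are removed faster)."""
    beta = contact * math.sqrt(virulence)
    return beta / (recovery + virulence)

# Search a grid for the virulence level that maximizes spread.  With
# beta ~ sqrt(v), the optimum lands at v = recovery: some lethality pays,
# but a pathogen that kills too fast cuts off its own transmission.
best = max((v / 1000 for v in range(1, 1000)), key=r0)
print(best)
```

The same shape applies to the meme analogy: the easier transmission is, the less the “pathogen” needs to keep its host functional.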

The internet has allowed memes that used to stay put in colleges to spread like wildfire to the rest of the population. (Similarly, talk radio allowed conservative memes to spread back in the 80s and 90s, and the adoption of the printing press in Europe probably triggered the witch hunts and Protestantism.) 

Anyway, this whole SJW-system got perfected on social media, and strangely, much of it is dependent on this performative mental illness. Eg, in “Don’t call people with uteruses ‘women’ because that’s triggering to trans people,” the mental illness claim is that the word “women” is “triggering” to someone and therefore ought to be avoided. The word “triggered” means “to trigger a panic attack,” as in someone with PTSD.

The use of “triggered” in most of these cases is absolutely false, but people claim it because it gets them their way. 

And if people are lying a bunch about having mental illness, and surrounded by nasty, toxic people who are also lying about mental illness, and if lying is cognitively taxing, then the end result is a lot of stressed out people with mental issues. 

The architecture of communication

I was thinking today about the Bay of Pigs fiasco–a classic example of “groupthink” in action. A bunch of guys supposedly hired for their ability to think through complex international scenarios somehow managed to use the biggest military in the world to plan an invasion that was overwhelmingly crushed within 3 days.

In retrospect, the Bay of Pigs Invasion looks like an idiotic idea; everyone should have realized this was going to be a colossal disaster. But ahead of time, everyone involved thought it was a great idea that would totally succeed, including a bunch of people who got killed during the invasion.

Groupthink happens when everyone starts advocating for the same bad ideas, either because they actually think they’re good ideas, or because individuals are afraid to speak up and say anything counter to the perceived group consensus.

There’s not much you can do about dumb, but consensus we can fight.

The obvious first strategy is to just try to get people to think differently.

Some people are agreeable by nature, inclined to go along with others and seek group harmony. These folks make great employees, but should not be the entirety of a decision-making body, because they are terrible at rejecting bad ideas.

By contrast, some people are assholes who like to be difficult. While you might not want to look specifically for “assholes,” the inclusion of at least one person with a reputation for saying unpopular things in any decision-making group will ensure that after a disaster, there will be at least one person there to say, “I told you so.”

You may also be able to strategically use people with different backgrounds/training/experiences/etc. Different schools of thought often come up with vastly different explanations for how the world works, and so can bring different ideas to the table. By contrast, a group that is all communists or all libertarians is likely to miss something that is obvious to folks from other groups; a group that’s all ivy league grads or all farmers is likewise prone to miss something.

Be careful when mixing up your group; getting useless people into the mix just because they have some superficial trait that makes them seem different from others will not help. You want someone who actually brings a different and valuable perspective or ideas to the group, not someone who satisfies someone else’s political agenda about employment.

Alternatively, if you can’t change the group’s membership, you may be able to break down consensus by isolating people so that they can’t figure out what the others think. Structure the group so that members can’t talk to each other in real life, or if that’s too inefficient, make everyone submit written reports on what they’re thinking before the talking begins. Having one member or part of the group that is geographically isolated–say, having some of your guys located in NY and some in Chicago–could also work.

But probably the easiest way to remove the urge to go along with bad ideas for the sake of group harmony is to remove the negative social repercussions for being an asshole.

Luckily for us, the technology for this already exists: anonymous internet messageboards. When people communicate anonymously, they are much more likely to say unpopular, assholish things like “your invasion plan is fucking delusional,” than they are in public, where they have to worry about being un-invited to Washington cocktail parties.

Anonymous is important. In fact, for truly good decision-making, you may have to go beyond anonymous to truly a-reputational posting with no recognizable names or handles.

The internet helpfully supplies us with communities with different degrees of anonymity. On Facebook, everyone uses their real names; Twitter has a mix of real names and anonymous handles where people still build up reputations; Reddit and blogs are pretty much all pseudonyms; 8Chan is totally anonymous with no names and no ability to build up a reputation.

Facebook is full of pretty messages about how you should Be Yourself! and support the latest good-feel political cause. Since approximately everyone on FB has friended their boss, grandmother, dentist, and everyone else they’ve ever met, (and even if you don’t, employers often look up prospective applicants’ FB profiles to see what they’ve been up to,) you can only post things on FB that won’t get you in trouble with your boss, grandmother, and pretty much everyone else you know.

I have often felt that Facebook is rather like a giant meta-brain, with every person an individual neuron sending messages out to everyone around them, and that so long as all of the neurons are messaging harmony, the brain is happy. But when one person is out of sync, sending out messages that don’t mesh with everyone else’s, it’s like having a rogue brain cell that’s firing at its own frequency, whenever it wants to, with the result that the meta-brain wants to purge the rogue cell before it causes epilepsy.

I know my position in the Facebook meta-brain, and it’s not a good one.

The Facebook architecture leads quickly to either saying nothing that could possibly offend anyone, (thus avoiding all forms of decision-making all together,) or competitive morality spirals in which everyone tries to prove that they are the best and most moral person in the room. (Which, again, does not necessarily lead to the best decision-making; being the guy who is most anti-communist does not mean you are the guy with the best ideas for how to overthrow Castro, for example.)

Twitter users and bloggers are anonymous enough to say disagreeable things without worrying about major effects on their personal lives; they don’t have to worry about grandma bringing up their blog posts at Thanksgiving dinner, for example, unless they’ve told grandma about their blog. However, they may still worry about alienating their regular readers. For example, if you run a regular feminist blog, but happen to think the latest scandal is nonsense and the person involved is making it all up, it’s probably not worth your while to bring it up and alienate your regular readers. If you’re the Wesleyan student paper, it’s a bad idea to run any articles even vaguely critical of #BlackLivesMatter.

Places like 8Chan run completely anonymously, eschewing even reputation. You have no idea if the person you’re talking to is the same guy you were talking to the other day, or even just a few seconds ago. Here, there are basically no negative repercussions for saying you think Ike’s invasion plan is made of horse feces. If you want to know all the reasons why your idea is dumb and you shouldn’t do it, 8Chan is probably the place to ask.

Any serious decision-making organization can set up its own totally anonymous internal messageboard that only members have access to, and then tell everyone that they’re not allowed to take real-life credit for things said on the board. Because once you’ve got anonymous, a-reputational, a-name communication, your next goal is to keep it that way.

You have to make it clear–in the rules themselves–to everyone involved that the entire point of the anonymous, a-name messageboard is to prevent people from knowing whose ideas are whose, so that people can say things that disrupt group harmony without fear. Second, you have to make sure that no one tries to bypass the no-reputations rule by just telling everyone who they are. You can do this by having strong rules against it, and/or by frequently claiming to be other posters yourself and stating in the rules that it is perfectly a-ok to pretend to be anyone else, precisely for this reason: if anyone can claim any identity, identity claims become worthless.

Obviously moderation is a fast route to groupthink; your small decision-making messageboard really shouldn’t need any moderation beyond the obvious, “don’t do anything illegal.”
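A minimal sketch of such a board (hypothetical code, not any real system’s API): posts carry no author field at all, and the only per-post secret is a throwaway deletion token that is never shown to readers, so reputations cannot form.

```python
import secrets

class AnonymousBoard:
    """A-reputational messageboard: no names, no handles, no author field.
    The only secret is a per-post deletion token returned to the poster."""

    def __init__(self):
        self._posts = {}  # deletion token -> post text

    def post(self, text):
        token = secrets.token_hex(16)  # known only to the poster
        self._posts[token] = text
        return token

    def read(self):
        # Readers see the text and nothing else.
        return list(self._posts.values())

    def delete(self, token):
        self._posts.pop(token, None)
```

Because `read()` exposes nothing but text, there is no hook for anyone to build up a name, and nothing to lose by saying the invasion plan is delusional.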

The biggest reason you have to make this extra clear from the get-go is that anonymous, a-reputational boards seem to be anathema to certain personality types–just compare the number of women on Facebook with the number of women on Reddit–so any women in your decision-making group may strongly protest the system. (So may many of the men, for that matter.)

Personal experience suggests that men and women use communication for different ends. Men want to convey facts quickly and get everything done; women want to build relationships. Using real names and establishing reputations lets them do that. Being anonymous and atomized and unable to carry on a coherent conversation with another person from minute to minute makes this virtually impossible. Unfortunately, building relationships gets you right back to where we started, with people neglecting to say uncomfortable things in the interests of maintaining group harmony.

This basic architecture of communication also seems to have an actual effect on the kinds of moral and political memes that get expressed and thrive in each respective environment. As I mentioned above, Facebook slants strongly to the left; when people’s reputations are on the line, they want to look like good people, and they make themselves look like good people by professing deep concern for the welfare of others. As a result, reading Facebook is like jumping onto Cthulhu’s back as he flies through the Overton Window. By contrast, if you’ve ever wandered into /pol/, you know that 8Chan slants far to the right. If you’re going to post about your love of Nazis, you don’t want to do it in front of your boss, your grandma, and all of your friends and acquaintances. When you want to be self-interested, 8Chan is the place to go; without reputation, there’s no incentive to engage in holiness spirals.

I don’t know what such a system would do to corporate or political decision-making of the Bay of Pigs variety, but if you want your organization to look out for its own interests (this is generally accepted as the entire point of an organization,) then it may be useful to avoid pressures that encourage your organization to look out for other people’s interests at the expense of its own.


My Facebook Feed is full of articles/comments/posts about how the Waco Shootout shows just how differently the media deals with white gang violence than with black gang violence.

Meanwhile, my Facebook Feed has zero articles/comments/posts about black gang violence.

So, I guess we deal with white gang violence by focusing on it, and black gang violence by ignoring it as much as possible.

In the interest of fairness, I will note that Asian gangs also exist, though Asians have a much lower overall crime rate than whites.