Algorithmic Optimization pt 2

 

There is nothing exceptional about the slowed-down Nancy Pelosi video, and nothing terribly exceptional in reporters saying uninformed things about subjects they aren’t well versed in.

The significance lies far more behind the scenes. From MarketWatch: Facebook Decided to Rethink Policies on Doctored Media Two Days Before Pelosi Video.

Wow, that is awfully coincidental that Facebook just happened to be thinking about changing these policies anyway right before a doctored video just happened to make it onto the news, prompting millions of people to pressure Facebook into doing exactly what Facebook already wanted to do.

Don’t be fooled: this isn’t spontaneous. Oh, sure, many of the people at the low end, like reporters, are just doing their job of reading the news they have been given into the camera, but there is plenty of active coordination going on behind the scenes by organizations like Facebook and the Democratic Party.

The Democrats realized sometime around 2016 that they have a meme problem. People on the internet thought Trump was funny and Democrats were boring sticks in the mud. People on the internet made videos about Hillary Clinton’s health, the European migration crisis, and other subjects the Dems didn’t approve of.

They don’t want this happening again.

So they are laying the groundwork now to re-write the policies and algorithms to strategically remove problematic conservative voices from the fray. Alex Jones has already been kicked off Youtube, Facebook, PayPal, etc. FB has taken a particularly hard line, threatening not just to delete Jones’s videos, but any account that posts them (excepting those that post them in order to criticize them).

Even Visa and Mastercard are getting in on the act, cutting off banking services to organizations whose political views they don’t like.

The ostensible reason for Alex Jones’s deplatforming is his supposed spread of conspiracy theories after Sandy Hook (I say “supposed” because I have not seen the clips in question), but it is obvious that 1. these concerns surfaced years after Sandy Hook and 2. no one has deplatformed the media outlets that pushed the “Iraq has WMDs” conspiracy theory, which cost the US trillions of dollars and led to the deaths of thousands (millions?) of people.

This has all been accompanied by a basic shift in how media platforms and infrastructure are viewed.

The traditional conception is that these are platforms, not publishers, and thus they merely provide something akin to infrastructure without much say over how you, the user, put it to use. For example, the electric company provides electricity to anyone who pays for it, and even if you use your electricity to warm the cages of your illegally gotten, exotic, endangered reptile collection, the electric company will generally keep providing you with electricity. The electric company does not have to approve of what you do with the electricity you buy, and if you break the law with their electricity, they see it as the state’s job to stop you.

A publisher and a platform, like Facebook, traditionally enjoyed different legal rights and safeguards. A publisher checks and decides to publish every single item they put out, and so is held to be responsible for anything they print. A platform merely provides a space where other people can publish their own works, without supervision. Platforms do not check posts before they go up, (as a practical matter, they can’t,) and thus are generally only held legally responsible for taking down material on their site if someone has notified them that it is in violation of some law.

Eg, suppose someone posts something really illegal, like child porn, on Facebook. If Facebook is a “publisher,” it is now publishing child porn and is in big legal trouble. But since Facebook is just a platform, it deletes the material and is legally in the clear. (The poster may still go to prison, of course.)

The conceptual shift in recent years has been to portray platforms as “allowing” people to come in and use their platforms, and then ask why they are allowing such shitty people to use their platform. No one asks why the electric company allows you to use their electricity to raise your army of bio mutant squids, but they do ask why Facebook allows right-wingers to be on the platform at all.

This is treating platforms like publishers, and they are absolutely jumping into it with both feet.

Let’s skip forward a bit in the video to the lady in white to see this in action:

It’s been viewed millions of times on the internet, but it’s not real… This is really scary, and not going away, and I’m fearful this is going to be all over the 2020 election.

You know, that’s how I felt when libs kept bringing up Harry Potter in the context of the last election, but for some reason invoking a children’s fantasy story about wizards is acceptable in political discourse while slowed-down videos aren’t.

And who is responsible for monitoring this stuff, taking it down? Facebook, Youtube took it down, but after how long?

Other commentator… At Facebook it’s still up because Facebook allows you to do a mock video…

The correct answer is that no one is responsible for monitoring all of Facebook and Youtube’s content, because that’s impossible to do and because Facebook isn’t your mommy. If you want Facebook to be your mom and monitor everything you consume, just stop talking and leave the adults alone.

CNN asks:

So Monika, in the wake of the 2016 election obviously Facebook has repeatedly told Congress and the American people that you’re serious about fighting disinformation, fake news, and yet this doctored video of Speaker Pelosi, which I think even your own fact checkers acknowledge is doctored, remains on your platform. Why?

Like the previous guy already said, because it’s not against Facebook’s TOS. Of course Anderson Cooper already knows this. He doesn’t need to get an actual Facebook representative on his show to find out that “funny reaction videos are allowed on Facebook.” And if Facebook were serious about maintaining its neutrality as a platform, not a publisher, it would not have bothered to send anyone to CNN–it would have just left matters at a blanket statement that the video does not violate the TOS.

The Facebook Lady (Monika) then explains how Facebook uses its algorithms to demote and demonetize content the “experts” claim is false. They’re proud of this and want you to know about it.

So misinformation that doesn’t promote violence, but misinformation that portrays the third most powerful politician in the country as a drunk or as somehow impaired, that’s fine? 

Oh no, quick, someone save the third most powerful person in the country from people saying mean things about her on the internet! We can’t have those disgusting peasants being rude to their betters!

Anderson Cooper is infuriatingly moronic; he claims he does not “logically understand” why Facebook leaves up videos that don’t violate the TOS, yet suggests that Facebook should “get out of the news business” if it can’t do it well.

Facebook isn’t in the “news business,” you moron, because Facebook is a platform, not a publisher. You’re in the news business, so you really ought to know the difference.

If you don’t know the difference between Facebook and a news organization, maybe you shouldn’t be in the news business.

That said, of course Anderson Cooper actually understands how Facebook works. This whole thing is a charade to give Facebook cover for changing its policies under the excuse of “there was public outrage, so we had to.” It’s an old scam.

So to summarize:

  1. The Dems want to change the algorithms to favor themselves, eg, Facebook decided to rethink policies on doctored media two days before Pelosi video, but don’t want to be so obvious about it that the Republicans fight back
  2. Wait for a convenient excuse, like a slowed-down video, then go into overdrive to convince you that Democracy is Seriously Threatened by Slow Videos, eg, Doctored Videos show Facebook willing enablers of the Russians; Doctored Pelosi video is leading tip in coming disinformation battle
  3. Convince Republican leadership to go along with it because, honestly, they’re morons: Congress investigating deepfakes after doctored Pelosi video, report says
  4. Deplatform their enemies
  5. Rinse and repeat: Vox Adpocalypse

 

One final note: even though I think there is coordinated activity at the top/behind the scenes at tech companies and the like, I don’t think the average talking head you see on TV is in on it. Conspiracies like that are too hard to pull off; rather, humans naturally cooperate and coordinate their behavior because they want to work together, signal high social status, keep their jobs, etc.


Algorithmic Optimization has Begun

My first reaction to this video was to yell “Head Like a Hole” at the screen.

… Head like a hole.
Black as your soul.
I’d rather die than give you control.
Head like a hole.
Black as your soul.
I’d rather die than give you control.
Bow down before the one you serve.
You’re going to get what you deserve.
Bow down before the one you serve.
You’re going to get what you deserve. …

With that bit of catharsis, let’s take a deeper look at the first video.

A doctored video of Pelosi that surfaced this week has been viewed millions of times and some social media giants are refusing to take it down.

By the way, the “doctoring” in this video was just slowing it down, not some scary-sounding “deep fake” like the scene where Forrest Gump met JFK. (Good luck distinguishing between “slowed down” and your average humorous “reaction” video.)

Social media sites like Youtube and Facebook have traditionally taken the view that they basically let people post whatever they want, without supervision, and then take it down if 1. they receive a complaint and 2. it is illegal or otherwise violates their terms of service. Aside from a Youtube algorithm that catches pirated music, these sites rely on users’ reports because they have no way to scan and check the contents of 100% of posts.
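That two-step rule (someone complains, and the complaint actually matches a policy) can be sketched in a few lines. This is a hypothetical illustration only; the policy names and the `handle_report` function are invented, not Facebook’s or YouTube’s actual systems:

```python
# Invented sketch of complaint-driven moderation: a post is only reviewed
# once someone reports it, and only removed if a reviewer confirms it
# violates an actual policy. Not any real platform's implementation.

POLICIES = {"copyright", "child_abuse_material", "incitement"}

def handle_report(post_id, claimed_violation, reviewer_finding):
    """Return the action a platform takes on a reported post."""
    if claimed_violation not in POLICIES:
        return "no_action"      # complaint doesn't cite an actual policy
    if reviewer_finding == claimed_violation:
        return "take_down"      # confirmed violation -> removed
    return "no_action"          # reviewed, but no violation found

# A slowed-down video reported as "misinformation" never reaches review,
# because "misinformation" is not on the policy list:
print(handle_report("pelosi_clip", "misinformation", None))  # no_action
```

Under a flow like this, “refusing to take it down” and “no policy was violated” are different outcomes, which is exactly the distinction the reporters blur.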

So before a company takes down a video, you have to make a credible argument to them that the video is in some way illegal or violating their TOS. If the copyright holder claimed violation, the video would probably be instantly gone, because social media sites are legally required to take down copyright violations.

But merely remixing someone else’s video, maybe adding some music or a laugh track or a bit of your own commentary, happens all the time and is usually allowed–sans a copyright claim, “this video has been edited” does not violate Youtube or Facebook’s terms of service.

So one sentence in, and already this reporter is showing a fundamental misunderstanding of how social media companies handle content complaints. They are not “refusing to take it down;” they are “not taking it down because they have not decided that it violates one of their policies.”

… I think what’s different now is the way that this kind of content can be weaponized. …

Sure, the Malleus Maleficarum, 1487, might have contributed to thousands of innocent people being tortured and burned to death during the European witch trials; Nazi propaganda might have contributed to the Holocaust; communist propaganda might have contributed to mass famines, Holodomor, the Great Leap Forward, etc., but now this kind of content can be weaponized!

… There are now websites out there where you can ask people for ten, twenty bucks to make deep fakes for you…

Deepfakes are legitimately interesting in their own right and we do need to have a real, sit-down think about the possibility of all video and photographs becoming unreliable, but this isn’t a deepfake. This is a video slowed down with ordinary video editing software like the one I use to make videos of the kids for grandma.

They’re trying to scare you with the ominous sounding “deepfake” because “slowed down a bit” doesn’t sound like nearly so big a threat to civilization.

Facebook’s actions drew strong criticism from media watchers, … so, what should viewers expect from Facebook and other social media sites when it comes to authenticating media on their platform?

Nothing. They should expect nothing because Facebook does not “authenticate” things on its platform, nor does it have the ability to.

Anyone who thinks, “I saw it on Facebook, therefore it must be true,” should not be allowed out of the house without supervision (nor should they be allowed on Facebook).

So, the video reflects this problem that we’re going to increasingly face, which is that we can’t trust our own eyes so it’s not that easy for the average citizen to make sense of what’s true and what’s false, what gets circulated or goes viral on Facebook, so they need to defer to people with expert opinions…

I think that popping sound was me turning into a Marxist.

Seriously, though, deferring political decisions to “experts” just leads to people competing over what “experts” believe. We discussed this back in my review of Tom Nichols’s The Death of Expertise:

Nichols ends with a plea that voters respect experts (and that experts, in turn, be humble and polite to voters.) After all, modern society is too complicated for any of us to be experts on everything. If we don’t pay attention to expert advice, he warns, modern society is bound to end in ignorant goo.

The logical inconsistency is that Nichols believes in democracy at all–he thinks democracy can be saved if ignorant people vote within a range of options as defined by experts like himself, eg, “What vaccine options are best?” rather than “Should we have vaccines at all?”

The problem, then, is that whoever controls the experts (or controls which expert opinions people hear) controls the limits of policy debates. This leads to people arguing over experts, which leads right back where we are today. As long as there are politics, “expertise” will be politicized, eg:

“Experts quoted in the piece.”

And where do these experts come from? I study these things; am I an expert? Do I get to decide which Youtube videos are Fake News?

What, someone’s complaining that I demonetized all of their pro-antifa videos? Too bad. I’m the expert, now.

“Experts” have brought us many valuable things, like heart surgery and airplanes. They have also made many mistakes. They once swore that witches were a serious problem, that the Earth stood still at the center of the universe, and that chemicals in the water were causing the frogs to change sex. Wait, that last one is true. Experts once claimed that homosexuality was a mental illness; today they proclaim that transsexual children should go on hormone blockers. Experts claimed that satanic ritual abuse was definitely a real thing and that there was an international conspiracy of Satanic preschools, resulting in real people actually going to prison.

Then there is the potential for the rich, powerful, and well-connected to hire their own experts and fund studies that coincidentally show they deserve to keep making lots of money and aren’t doing anything that could harm your health or well-being (like the time gas companies paid for studies claiming leaded gas was harmless, or tobacco companies paid experts to claim cigarettes didn’t cause cancer).

This is why courts let both sides bring their own experts to a case–because there are always experts on both sides.

Back to the video:

I think the republic begins to suffer if people are getting extremely bad information and the authorities, the elites, the gatekeepers, are basically throwing up their hands and just saying, “not my problem.”

Don’t worry about that popping sound.

The mode of production of material life conditions the general process of social, political and intellectual life. –Marx,  A Contribution to the Critique of Political Economy

The ideas of the ruling class are in every epoch the ruling ideas, i.e. the class which is the ruling material force of society, is at the same time its ruling intellectual force. The class which has the means of material production at its disposal, has control at the same time over the means of mental production, so that thereby, generally speaking, the ideas of those who lack the means of mental production are subject to it. –Marx, The German Ideology

Back to the guy in the video:

In the last election, we saw how outside actors came in and tried to manipulate the American electorate, spreading misinformation and Facebook was their primary platform for spreading misinformation.

The guy who just said that the Republic begins to suffer if people are getting misinformation just spread misinformation about “outside interference” in the 2016 election, and he thinks there exists some sort of politically disinterested “experts” who can determine which videos are true or not?

So what happens when those actors, when the Russians, or some bad political actors here, try to use manipulated video that doesn’t just change a snippet in a clip, but invents things wholecloth?

Like the time the New York Times ran a story attacking a student from Covington Catholic High School based on deceptively edited video footage?

Or is it okay when the New York Times, the paper of record that millions of people trust for their news, does it, but bad when Alex Jones, the guy who thinks chemicals in the water are turning the frogs gay, does it?

I don’t see how, once that firehose [of fake videos] is unleashed, we have any choice but to have some authority step in and make those distinctions about what’s real and what’s not.

It’s amazing how quickly we went from “Hooray the internet is spreading the Arab Spring” to “Oh no the internet is threatening our hold on political power; shut it down!”

For the sake of both my need to sleep and everyone’s rage levels, let’s continue this in the next post.

A Little Review of Big Data Books

I recently finished three books on “big data”– Big Data: A Revolution That Will Transform How We Live, Work, and Think, by Viktor Mayer-Schönberger and Kenneth Cukier; Everybody Lies: Big Data, New Data, and What the Internet can tell us about who we Really Are, by Seth Stephens-Davidowitz; and Big Data At Work: Dispelling the Myths, Uncovering the opportunities, by Thomas H. Davenport.

None of these books was a whiz-bang thriller, but I enjoyed them.

Big Data was a very sensible introduction. What exactly is “big data”? It’s not just bigger data sets (though it is also that.) It’s the opportunity to get all the data.

Until now, the authors point out, we have lived in a data poor world. We have had to carefully design our surveys to avoid sampling bias because we just can’t sample that many people. There’s a whole bunch of math done over in statistics to calculate how certain we can be about a particular result, or whether it could just be the result of random chance biasing our samples. I could poll 10,000 people about their jobs, and that might be a pretty good sample, but if everyone I polled happens to live within walking distance of my house, is this a very representative sample of everyone in the country? Now think about all of those studies on the mechanics of sleep done on whatever college students or homeless guys a scientist could convince to sleep in a lab for a week. How representative are they?
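The survey math being alluded to can be made concrete. Here is a minimal Python sketch of the standard 95% margin of error for a polled proportion (the textbook normal approximation; the numbers are illustrative, not from any particular survey):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated from a
    simple random sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Uncertainty shrinks only as 1/sqrt(n): polling 100x more people
# buys you just 10x the precision.
for n in (100, 10_000, 1_000_000):
    print(n, round(margin_of_error(n), 4))
# n=100 gives roughly +/-9.8 points; n=1,000,000 roughly +/-0.1 points.
```

Note that this formula only handles random error. If everyone I polled lives within walking distance of my house, no sample size fixes that bias, which is why getting “all the data” is such a different regime from sampling.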

Today, though, we suddenly live in a data rich world. An exponentially data rich world. A world in which we no longer need to correct for bias in our sample, because we don’t have to sample. We can just get… all the data. You can go to Google and find out how many people searched for “rabbit” on Tuesday, or how many people misspelled “rabbit” in various ways.

Data is being used in new and interesting (and sometimes creepy) ways. Many things that previously weren’t even considered data are now being quantitized–like one researcher quantitizing people’s backsides to determine whether a car is being driven by its owner, or a stranger.

One application I find promising is using people’s searches for various disease symptoms to identify people who may have various diseases before they seek out a doctor. Catching cancer patients earlier could save millions of lives.

I don’t have the book in front of me anymore, so I am just going by memory, but it made a good companion to Auerswald’s The Code Economy, since the modern economy runs so much on data.

Everybody Lies was a much more lighthearted, anecdotal approach to the subject, discussing lots of different studies. Stephens-Davidowitz was inspired by Freakonomics, and he wants to use Big Data to uncover hidden truths of human behavior.

The book discusses, for example, people’s pornographic searches (as per the title, people routinely lie about how much porn they look at on the internet) and whether people’s pornographic preferences can be used to determine what percent of people in each state are gay. It turns out that we can get a breakdown of porn queries by state and variety, allowing a rough estimate of the gay and straight population of each state–and it appears that what people are willing to tell pollsters about their sexuality doesn’t match what they search for online. In more conservative states, people are less likely to admit to pollsters that they are gay, but plenty of supposedly “straight” people are searching for gay porn–about the same number of people as actually admit to being gay in more liberal states.

Stephens-Davidowitz uses similar data to determine that people have been lying to pollsters (or perhaps themselves) about whom they plan to vote for. For example, Donald Trump got anomalously high vote totals in some areas, and Obama got anomalously low ones, compared to what people in those areas told pollsters–and both kinds of area correlated highly with the parts of the country where people made a lot of racist Google searches.

Most of the studies discussed are amusing, like the discovery of the racehorse American Pharaoh. Others are quite important, like a study that found that child abuse was probably actually going up at a time when official reports said it wasn’t–the reports probably weren’t showing abuse due to a decrease in funding for investigating abuse.

At times the author steps beyond the studies and offers interpretations of why the results are the way they are that I think go beyond what the data tells us, like his conclusion that parents are biased against their daughters because they are more concerned with girls being fat than with boys being fat, or because they are more likely to Google “is my son a genius?” than “is my daughter a genius?”

I can think of a variety of alternative explanations. Eg, society itself is crueler to overweight women than to overweight men, so it is reasonable, in turn, for parents to worry more about a daughter who will face cruelty than a boy who will not. Girls are more likely to be in gifted programs than boys, but perhaps this means that giftedness in girls is simply less exceptional than giftedness in boys, among whom it is more unusual. Or perhaps male giftedness is different from female giftedness in some way that makes parents need more information on the topic.

Now, here’s an interesting study. Google can track how many people make Islamophobic searches at any particular time. Compared against Obama’s speech that tried to calm outrage after the San Bernardino attack, this data reveals that the speech was massively unsuccessful. Islamophobic searches doubled during and after the speech. Negative searches about Syrian refugees rose 60%, while searches asking how to help dropped 35%.

In fact, just about every negative search we could think to test regarding Muslims shot up during and after Obama’s speech, and just about every positive search we could think to test declined. …

Instead of calming the angry mob, as everybody thought he was doing, the internet data tells us that Obama actually inflamed it.

However, Obama later gave another speech, on the same topic. This one was much more successful. As the author put it, this time, Obama spent little time insisting on the value of tolerance, which seems to have just made people less tolerant. Instead, “he focused overwhelmingly on provoking people’s curiosity and changing their perceptions of Muslim Americans.”

People tend to react positively toward people or things they regard as interesting, and invoking curiosity is a good way to get people interested.

The author points out that “big data” is most likely to be useful in fields where the current data is poor. In the case of American Pharaoh, for example, people just plain weren’t getting a lot of data on racehorses before buying and selling them. It was a field based on people who “knew” horses and their pedigrees, not on people who x-rayed horses to see how big their hearts and lungs were. By contrast, hedge funds investing in the stock market are already up to their necks in data, trying to maximize every last penny. Horse racing was ripe for someone to become successful by unearthing previously unused data and making good predictions; the stock market is not.

And for those keeping track of how many people make it to the end of the book, I did. I even read the endnotes, because I do that.

Big Data At Work was very different. Rather than entertain us with the success of Google Flu or academic studies of human nature, BDAW discusses how to implement “big data” (the author admits it is a silly term) strategies at work. This is a good book if you own, run, or manage a business that could utilize data in some way. UPS, for example, uses driving data to minimize package delivery routes; even a small saving per package by optimizing routes leads to a large saving for the company as a whole, since they deliver so many packages.

The author points out that “big data” often isn’t big so much as unstructured. Photographs, call logs, Facebook posts, and Google searches may all be “data,” but you will need some way to quantitize these before you can make much use of them. For example, companies may want to gather customer feedback reports, feed them into a program that recognizes positive or negative language, and then quantitizes how many people called to report that they liked Product X vs how many called to report that they disliked it.
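As a toy version of that quantitizing step, here is a minimal sketch. The word lists and the `quantitize` function are invented for illustration; a real system would use a trained sentiment model rather than keyword matching:

```python
import re

# Invented toy word lists -- a real system would use a trained model.
POSITIVE = {"like", "love", "great", "works"}
NEGATIVE = {"dislike", "hate", "broken", "refund"}

def quantitize(feedback_reports):
    """Turn unstructured feedback text into counts of positive,
    negative, and unclear reports."""
    tally = {"positive": 0, "negative": 0, "unclear": 0}
    for report in feedback_reports:
        words = set(re.findall(r"[a-z]+", report.lower()))
        pos = len(words & POSITIVE)
        neg = len(words & NEGATIVE)
        if pos > neg:
            tally["positive"] += 1
        elif neg > pos:
            tally["negative"] += 1
        else:
            tally["unclear"] += 1
    return tally

print(quantitize(["I love Product X",
                  "Product X is broken, I want a refund"]))
# {'positive': 1, 'negative': 1, 'unclear': 0}
```

The point is the shape of the pipeline, not the crude word matching: free-form text goes in, a number a manager can act on comes out.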

I think an area ripe for this kind of quantitization is medical data, which currently languishes in doctors’ files, much of it on paper, protected by patient privacy laws. But people post a good deal of information about their medical conditions online, seeking help from other people who’ve dealt with the same diseases. Currently, there are a lot of diseases (take depression) where treatment is very hit-or-miss, and doctors basically have to try a bunch of drugs in a row until they find one that works. A program that could trawl through forum posts and assemble data on patients and medical treatments that worked or failed could help doctors refine treatment for various difficult conditions–“Oh, you look like the kind of patient who would respond well to melatonin,” or “Oh, you have the characteristics that make you a good candidate for Prozac.”

The author points out that most companies will not be able to keep the massive quantities of data they are amassing. A hospital, for example, collects a great deal of data about patients’ heart rates and blood oxygen levels every day. While it might be interesting to look back at 10 years’ worth of patient heart rate data, hospitals can’t really afford to invest in databanks to store all of this information. Rather, what companies need is real-time or continuous data processing that analyzes current data and makes predictions/recommendations for what the company (or doctor) should do now.

For example, one of the books (I believe it was “Big Data”) discussed a study of premature babies which found, counter-intuitively, that they were most likely to have emergencies soon after a lull in which they had seemed to be doing rather well–stable heart rate, good breathing, etc. Knowing this, a hospital could have a computer monitoring all of its premature babies and automatically updating their status (“stable” “improving” “critical” “likely to have a big problem in six hours”) and notifying doctors of potential problems.
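That monitoring loop can be sketched as a small stateful object that keeps only a recent window of readings and emits a status, rather than archiving years of raw data. All thresholds and status labels here are invented for illustration, not clinical values:

```python
from collections import deque

class VitalsMonitor:
    """Toy continuous monitor: holds only a short window of recent
    readings and returns a status on each update. Thresholds invented."""

    def __init__(self, window=5, low=100, high=180):
        self.readings = deque(maxlen=window)  # old data falls off the end
        self.low, self.high = low, high

    def update(self, heart_rate):
        self.readings.append(heart_rate)
        if heart_rate < self.low or heart_rate > self.high:
            return "critical"
        # A suspiciously quiet stretch can itself be a warning sign,
        # echoing the lull-before-emergency finding in premature babies.
        if (len(self.readings) == self.readings.maxlen
                and max(self.readings) - min(self.readings) < 2):
            return "unusually stable: flag for review"
        return "stable"

monitor = VitalsMonitor()
for hr in (140, 141, 140, 140, 141):
    status = monitor.update(hr)
print(status)  # unusually stable: flag for review
```

The design point is that the computer watches every patient continuously and only surfaces the interesting cases to a human, instead of asking anyone to store or read a decade of raw heart-rate logs.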

The book goes into a fair amount of detail about how to implement “big data solutions” at your office (you may have to hire someone who knows how to code and may even have to tolerate their idiosyncrasies), which platforms are useful for data, the fact that “big data” is not all that different from the standard analytics most companies already run, etc. Once you’ve got the data pumping, actual humans may not need to be involved with it very often–for example, you may have a system that automatically updates drivers’ routes with traffic reports, or sprinklers that automatically turn on when the ground gets too dry.

It is easy to see how “big data” will become yet another facet of the algorithmization of work.

Overall, Big Data at Work is a good book, especially if you run a company, but not as amusing if you are just a lay reader. If you want something fun, read the first two.

Logan Paul and the Algorithms of Outrage

Leaving aside the issues of “Did Logan Paul actually do anything wrong?” and “Is changing YouTube’s policies actually in Game Theorist’s interests?” Game Theorist makes a good point: while YouTube might want to say, for PR reasons, that it is doing something about big, bad, controversial videos like Logan Paul’s, it also makes money off those same videos. YouTube–like many other parts of the internet–is primarily click driven. (Few of us are paying money for programs on YouTube Red.) YouTube wants views, and controversy drives views.

That doesn’t mean YouTube wants just any content–a reputation for having a bunch of pornography would probably have a damaging effect on channels aimed at small children, as their parents would click elsewhere. But aside from the actual corpse, Logan’s video wasn’t the sort of thing that would drive away small viewers–they’d get bored of the boring non-cartoons talking to the camera long before the suicide even came up.

Logan Paul actually managed to hit a very sweet spot: controversial enough to draw in visitors (tons of them) but not so controversial that he’d drive away other visitors.

In case you’ve forgotten the controversy in a fog of other controversies, LP’s video about accidentally finding a suicide in the Suicide Forest was initially well-received, racking up thousands of likes and views before someone got offended and started up the outrage machine. Once the outrage machine got going, public sentiment turned on a dime and LP was suddenly the subject of a full two or three days of Twitter hate. The hate, of course, got YouTube more views. LP took down the video and posted an apology–which generated more attention. Major media outlets were now covering the story. Even Tablet managed to quickly come up with an article: Want a New Years Resolution? Don’t be Like Logan Paul.

And it worked. I passed up Tablet’s regular article on Trump and Bagels and Culture, but I clicked on that article about Logan Paul because I wanted to know what on earth Tablet had to say about LP, a YouTuber whom, 24 hours prior, I had never heard of.

And the more respectable (or at least highly-trafficked) news outlets picked up the story, the higher Logan’s videos rose on the YouTube charts. And as more people watched more of LP’s other videos, they found more things to be offended at. For example, once he ran through the streets of Japan holding a fish. A FISH, I tell you. He waved this fish at people and was generally very annoying.

I don’t like LP’s style of humor, but I’m not getting worked up over a guy waving a fish around.

So understand this: you are in an outrage machine. The purpose of the outrage machine is to drive traffic, which makes clicks, which result in ad revenue. There are probably whole websites (Huffpo, CNN) that derive a significant percent of their profits from hate-clicks–that is, intentionally posting incendiary garbage not because they believe it or think it is just or true or appeals to their base, but because they can get people to click on it in sheer shock or outrage.

Your emotions–your “emotional labor,” as the SJWs call it–are being turned into someone else’s dollars.

And the result is a country that is increasingly polarized. Increasingly outraged. Increasingly exhausted.

Step back for a moment. Take a deep breath. Get some fresh air. Ask yourself, “Does this really matter? Am I actually helping anyone? Will I remember this in a week?”

I’d blame the SJWs for the outrage machine–and really, they are good at running it–but I think it started with CNN and “24 hour news.” You have to do something to fill that time. Then came Fox News, which was like CNN, but more controversial in order to lure viewers away from the more established channel. Now we have the interplay of Facebook, Twitter, HuffPo, online newspapers, YouTube, etc–driven largely by automated algorithms designed to maximize clicks–even hate-clicks.

The Logan Paul controversy is just one example out of thousands, but let’s take a moment and think about whether it really mattered. Some guy whose job description is “makes videos of his life and posts them on YouTube” was already shooting a video about his camping trip when he happened upon a dead body. He filmed the body, called the police, canceled his camping trip, downed a few cups of sake while talking about how shaken he was, and ended the video with a plea that people seek help and not commit suicide.

In between these events was laughter–I interpret it as nervous laughter in an obviously distressed person. Other people interpret this as mocking. Even if you think LP was mocking the deceased, I think you should be more concerned that Japan has a “Suicide Forest” in the first place.

Let’s look at a similar case: When three year old Alan Kurdi drowned, the photograph of his dead body appeared on websites and newspapers around the world–earning thousands of dollars for the photographers and news agencies. Politicians then used little Alan’s death to push particular political agendas–Hillary Clinton even talked about Alan Kurdi’s death in one of the 2016 election debates. Alan Kurdi’s death was extremely profitable for everyone making money off the photograph, but no one got offended over this.

Why is it acceptable for photographers and media agencies to make money off a three year old boy who drowned because his father was a negligent fuck who didn’t put a life vest on him*, but not acceptable for Logan Paul to make money off a guy who chose to kill himself and then leave his body hanging in public where any random person could find it?

Elian Gonzalez, sobbing, torn at gunpoint from his relatives. BTW, this photo won the 2001 Pulitzer Prize for Breaking News.

Let’s take a more explicitly political case. Remember when Bill Clinton and Janet Reno sent 130 heavily armed INS agents to the home of child refugee Elian Gonzalez’s relatives** so they could kick him out of the US and send him back to Cuba?

Now imagine Donald Trump sending SWAT teams after sobbing children. How would people react?

The outrage machine functions because people think it is good. It convinces people that it is casting light on terrible problems that need correcting. People are getting offended at things that they wouldn’t have if the outrage machine hadn’t told them to. You think you are serving justice. In reality, you are mad at a man for filming a dead guy and running around Japan with a fish. Jackass did worse, and it was on MTV for two years.

Game Theorist wants more consequences for people like Logan Paul, but he doesn’t realize that anyone can get offended at just about anything. His own videos have graphic descriptions of small children being murdered (in videogame contexts, like Five Nights at Freddy’s or “What would happen if the babies in Mario Kart were involved in real car crashes at racing speeds?”) I don’t find this “family friendly.” Sometimes I (*gasp*) turn off his videos as a result. Does that mean I want a Twitter mob to come destroy his livelihood? No. It means a Twitter mob could destroy his livelihood.

For that matter, as Game Theorist himself notes, the algorithm itself rewards and amplifies outrage–meaning that people are incentivized to create completely false outrage against innocent people. Punishing one group of people more because the algorithm encourages bad behavior in other people is cruel and does not solve the problem. Changing the algorithm would solve the problem, but the algorithm is what makes YouTube money.
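To see why such an algorithm amplifies outrage even with no one intending it, consider a minimal sketch of an engagement-maximizing ranker. This is not YouTube’s actual algorithm (its details are private); the weights and numbers below are invented for illustration. The key assumption is simply that every interaction–including angry comments and “look how awful this is” shares–counts as positive signal.

```python
# Hedged sketch of an engagement-maximizing ranker. All interaction is
# treated as positive signal, so hate-engagement boosts rank just like
# genuine enthusiasm. Weights and view counts are invented.

def engagement_score(video: dict) -> float:
    return (1.0 * video["views"]
            + 5.0 * video["comments"]   # angry comments count too
            + 3.0 * video["shares"])    # outraged shares count too

videos = [
    {"name": "calm explainer", "views": 50_000, "comments": 200,   "shares": 500},
    {"name": "outrage bait",   "views": 50_000, "comments": 9_000, "shares": 6_000},
]

# Same raw view count, but the controversial video dominates the ranking,
# so the recommender surfaces it to more people, compounding the effect.
ranked = sorted(videos, key=engagement_score, reverse=True)
print([v["name"] for v in ranked])  # prints "['outrage bait', 'calm explainer']"
```

Nothing in the scoring function is malicious; it just cannot tell delight from fury, and fury produces more interaction per viewer.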

In reality, the outrage machine is pulling the country apart–and I don’t know about you, but I live here. My stuff is here; my loved ones are here.

The outrage machine must stop.

*I remember once riding in an airplane with my father. As the flight crew explained that in the case of a sudden loss of cabin pressure, you should secure your own mask before assisting your neighbors, his response was a very vocal “Hell no, I’m saving my kid first.” Maybe not the best idea, but the sentiment is sound.

**When the boat Elian Gonzalez and his family were riding in capsized, his mother and her boyfriend put him in an inner tube, saving his life even though they drowned.