AI not working as intended

[Chart: younger Republican men less likely to see need to remove offensive content]
source

As we have previously covered, there has been a push in recent times for social media companies to censor posts containing offensive remarks, Fake News, Russian bots, or slowed-down videos of Nancy Pelosi. Social media companies have been dumb enough to go along with this demand (mostly because the people who run them are also the kinds of people who think social media companies should censor more), but as a practical matter, they are of course incapable of reading and evaluating every post that comes their way.

So these companies need algorithms to trawl their millions of posts for offensive content.

Unfortunately (for someone) any AI trained on things that people actually find offensive will censor things the Woke don’t want censored:

On LGBTQ pride month, we share some of the main findings of our research looking at the impacts of Artificial Intelligence on LGBTQ speech. Our goal is to shed some light on the gaps and biases that may be present in AI technologies that are currently being developed to moderate content on internet platforms and demonstrate how they might have significant implications for LGBTQ rights.

The results of this study are unintentionally hilarious:

By training their algorithm to learn what pieces of content are more likely to be considered as “toxic”, Perspective may be a useful tool to make automated decisions about what should stay and what should be taken down from the internet platforms. …

We used Perspective’s API to measure the perceived levels of toxicity of prominent drag queens in the United States and compared them with the perceived levels of toxicity of other prominent Twitter users in the US, especially far-right figures. …

After getting access to Twitter’s API, we collected tweets of all former participants of RuPaul’s Drag Race (seasons 1 to 10) who have verified accounts on Twitter and who post in English, amounting to 80 drag queen Twitter profiles.

We used Perspective’s production version 6 dealing with “toxicity”. We only used content posted in English, so tweets in other languages were excluded. We also collected tweets of prominent non-LGBTQ people (Michelle Obama, Donald Trump, David Duke, Richard Spencer, Stefan Molyneux and Faith Goldy). These Twitter accounts were chosen as control examples for less controversial or “healthy” speech (Michelle Obama) and for extremely controversial or “very toxic” speech (Donald Trump, David Duke, Richard Spencer, Stefan Molyneux and Faith Goldy). In total, we collected 116,988 tweets and analysed 114,204 (after exclusions).
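For reference, scoring a piece of text with Perspective amounts to one JSON POST per comment. The sketch below follows the request/response shape of Google's public Comment Analyzer API; it skips the actual network call and parses a canned response, since a real query requires an API key (the key and sample score here are placeholders).

```python
import json

# Perspective's hosted endpoint (per Google's Comment Analyzer API docs);
# a real request appends ?key=YOUR_API_KEY.
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(text: str) -> dict:
    """Build the JSON body Perspective expects for a TOXICITY query."""
    return {
        "comment": {"text": text},
        "languages": ["en"],  # the study excluded non-English tweets
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response: dict) -> float:
    """Pull the 0..1 summary toxicity score out of an API response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    body = build_request("I love you bitch")
    print(json.dumps(body, indent=2))

    # Canned response in the documented shape, for illustration only.
    sample = {"attributeScores": {"TOXICITY": {"summaryScore": {"value": 0.92}}}}
    print(toxicity_score(sample))
```

The study ran this query over each of the 114,204 tweets and compared the resulting score distributions per account.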

“drag queens are in blue; white supremacists in red; Michelle Obama in green and Donald Trump, in orange.”

It turns out that drag queens are really rude and Richard Spencer is pretty polite. Even Donald Trump, who is kind of rude sometimes, is more polite than the majority of drag queens.

Of course, anyone who has actually read their tweets would have known this already. As the study notes, drag queens are likely to pepper their tweets with phrases like “I love you bitch” and “I’m a faggot,” while white supremacists are likely to say things like “Italy has a lovely history and culture and we should preserve it.” Drag queens also know their speech is protected (back in the early days of systems like AOL, you weren’t allowed to use words like “bitch” and “fuck,” but today they are allowed), while WNs know that they are being watched and that if they step out of line, there’s a good chance their accounts will be deleted.

Any algorithm trained on actual rudeness as perceived by normal humans will of course tag drag queens’ typical tweets as offensive; it will take intervention by actual humans to train the algorithms to pick up on the words creators actually want to censor, like “Donald Trump” or “Chinese detention camps”.
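To see why, here is a deliberately crude sketch (the training phrases and the scoring rule are invented by me, and this is not Perspective's actual model): a bag-of-words scorer learns which words co-occur with toxic labels, so it keys on surface vocabulary like “bitch” rather than intent, and politely worded extremism sails through.

```python
from collections import Counter

# Tiny hand-labeled corpus (invented for illustration).
TOXIC = ["you stupid bitch", "shut up you idiot", "what a dumb bitch"]
CLEAN = ["italy has a lovely history", "we should preserve our culture"]

def word_weights(toxic, clean):
    """Weight each word by how much more often it appears in toxic examples,
    with add-one smoothing so rare words get a near-neutral weight."""
    t, c = Counter(" ".join(toxic).split()), Counter(" ".join(clean).split())
    return {w: (t[w] + 1) / (t[w] + c[w] + 2) for w in set(t) | set(c)}

def score(text, weights):
    """Average per-word toxicity weight; unseen words count as neutral (0.5)."""
    words = text.lower().split()
    return sum(weights.get(w, 0.5) for w in words) / len(words)

weights = word_weights(TOXIC, CLEAN)
# An affectionate drag-queen tweet scores high because "bitch" dominates;
# polite vocabulary scores low regardless of the politics behind it.
print(score("i love you bitch", weights))
print(score("italy has a lovely history", weights))
```

The fix the platforms actually want requires humans to relabel the training data, which is exactly the intervention described above.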

Speaking of which:

Van de Weghe has continued to study Chinese AI—how it tracks people with ever-improving facial recognition software. He describes the new “social credit” programs that use AI to combine data from numerous sources, assign scores to people’s behavior and allocate privileges accordingly. In 2013, when Liu Hu, a Chinese journalist, exposed a government official’s corruption, he lost his social credit and could no longer buy plane tickets or property, take out loans, or travel on certain train lines. …

Jennifer Pan, an assistant professor of communication, explains why Chinese citizens accept social credit programs. “People think others spit in the street or don’t take care of shared, public facilities. They imagine that social credit could lead to a better, more modern China. This is an appealing idea. Political dissent is already so highly suppressed and marginalized that the addition of AI is unlikely to have anything more than an incremental effect.”

The result for journalists is that actual prisons (where many are currently held) are replaced by virtual prisons—less visible and therefore more difficult to report on. In the face of this, Van de Weghe says, many journalists he knows have quit or self-censored. And while reporters outside China can critique the general practice of censorship, thousands of individual cases go unnoticed. Government computers scan the internet for all types of dissidence, from unauthorized journalism to pro-democracy writing to photos of Winnie-the-Pooh posted by citizens to critique President Xi Jinping, who is thought to bear a resemblance. AI news anchors—simulations that resemble humans on-screen—deliver news 24/7. The government calls this media control “harmonization.” The Communist Party’s goal for sustaining its rule, according to Pan, “is to indoctrinate people to agree. Authoritarian regimes don’t want fear.”

There is a lot of amazing technological progress coming out of China these days, for good or bad.

If you think mass AI-censorship and surveillance sounds scary in China but totally good and awesome in the US, you haven’t thought this through.

Logan Paul and the Algorithms of Outrage

Leaving aside the issues of “Did Logan Paul actually do anything wrong?” and “Is changing YouTube’s policies actually in Game Theorist’s interests?”, Game Theorist makes a good point: while YouTube might want to say, for PR reasons, that it is doing something about big, bad, controversial videos like Logan Paul’s, it also makes money off those same videos. YouTube–like many other parts of the internet–is primarily click-driven. (Few of us are paying money for programs on YouTube Red.) YouTube wants views, and controversy drives views.

That doesn’t mean YouTube wants just any content–a reputation for hosting a bunch of pornography would probably have a damaging effect on channels aimed at small children, as their parents would click elsewhere. But aside from the actual corpse, Logan’s video wasn’t the sort of thing that would drive away young viewers–they’d get bored of the non-cartoon adults talking to the camera long before the suicide even came up.

Logan Paul actually managed to hit a very sweet spot: controversial enough to draw in visitors (tons of them) but not so controversial that he’d drive away other visitors.

In case you’ve forgotten the controversy in a fog of other controversies, LP’s video about accidentally finding a suicide in the Suicide Forest was initially well-received, racking up thousands of likes and views before someone got offended and started up the outrage machine. Once the outrage machine got going, public sentiment turned on a dime and LP was suddenly the subject of a full two or three days of Twitter hate. The hate, of course, got YouTube more views. LP took down the video and posted an apology–which generated more attention. Major media outlets were now covering the story. Even Tablet managed to quickly come up with an article: Want a New Years Resolution? Don’t be Like Logan Paul.

And it worked. I passed up Tablet’s regular article on Trump and Bagels and Culture, but I clicked on that article about Logan Paul because I wanted to know what on earth Tablet had to say about LP, a YouTuber whom, 24 hours prior, I had never heard of.

And the more respectable (or at least highly-trafficked) news outlets picked up the story, the higher Logan’s videos rose on the YouTube charts. And as more people watched more of LP’s other videos, they found more things to be offended at. For example, once he ran through the streets of Japan holding a fish. A FISH, I tell you. He waved this fish at people and was generally very annoying.

I don’t like LP’s style of humor, but I’m not getting worked up over a guy waving a fish around.

So understand this: you are in an outrage machine. The purpose of the outrage machine is to drive traffic, which generates clicks, which generate ad revenue. There are probably whole websites (HuffPo, CNN) that derive a significant percentage of their profits from hate-clicks–that is, from intentionally posting incendiary garbage not because they believe it or think it is just or true or appeals to their base, but because they can get people to click on it in sheer shock or outrage.

Your emotions–your “emotional labor,” as the SJWs call it–are being turned into someone else’s dollars.

And the result is a country that is increasingly polarized. Increasingly outraged. Increasingly exhausted.

Step back for a moment. Take a deep breath. Get some fresh air. Ask yourself, “Does this really matter? Am I actually helping anyone? Will I remember this in a week?”

I’d blame the SJWs for the outrage machine–and really, they are good at running it–but I think it started with CNN and “24 hour news.” You have to do something to fill that time. Then came Fox News, which was like CNN, but more controversial in order to lure viewers away from the more established channel. Now we have the interplay of Facebook, Twitter, HuffPo, online newspapers, YouTube, etc.–driven largely by automated algorithms designed to maximize clicks–even hate-clicks.

The Logan Paul controversy is just one example out of thousands, but let’s take a moment and think about whether it really mattered. Some guy whose job description is “makes videos of his life and posts them on YouTube” was already shooting a video about his camping trip when he happened upon a dead body. He filmed the body, called the police, canceled his camping trip, downed a few cups of sake while talking about how shaken he was, and ended the video with a plea that people seek help and not commit suicide.

In between these events was laughter–I interpret it as nervous laughter in an obviously distressed person. Other people interpret this as mocking. Even if you think LP was mocking the deceased, I think you should be more concerned that Japan has a “Suicide Forest” in the first place.

Let’s look at a similar case: When three year old Alan Kurdi drowned, the photograph of his dead body appeared on websites and newspapers around the world–earning thousands of dollars for the photographers and news agencies. Politicians then used little Alan’s death to push particular political agendas–Hillary Clinton even talked about Alan Kurdi’s death in one of the 2016 election debates. Alan Kurdi’s death was extremely profitable for everyone making money off the photograph, but no one got offended over this.

Why is it acceptable for photographers and media agencies to make money off a three year old boy who drowned because his father was a negligent fuck who didn’t put a life vest on him*, but not acceptable for Logan Paul to make money off a guy who chose to kill himself and then leave his body hanging in public where any random person could find it?

Elian Gonzalez, sobbing, torn at gunpoint from his relatives. BTW, this photo won the 2001 Pulitzer Prize for Breaking News.

Let’s take a more explicitly political case. Remember when Bill Clinton and Janet Reno sent 130 heavily armed INS agents to the home of child refugee Elian Gonzalez’s relatives** so they could kick him out of the US and send him back to Cuba?

Now imagine Donald Trump sending SWAT teams after sobbing children. How would people react?

The outrage machine functions because people think it is good. It convinces people that it is casting light on terrible problems that need correcting. People are getting offended at things they wouldn’t have noticed if the outrage machine hadn’t told them to. You think you are serving justice. In reality, you are mad at a man for filming a dead guy and running around Japan with a fish. Jackass did worse, and it was on MTV for two years.

Game Theorist wants more consequences for people like Logan Paul, but he doesn’t realize that anyone can get offended at just about anything. His videos have graphic descriptions of small children being murdered (in videogame contexts, like Five Nights at Freddy’s or “What would happen if the babies in Mario Kart were involved in real car crashes at racing speeds?”). I don’t find this “family friendly.” Sometimes I (*gasp*) turn off his videos as a result. Does that mean I want a Twitter mob to come destroy his livelihood? No. It means a Twitter mob could destroy his livelihood.

For that matter, as Game Theorist himself notes, the algorithm itself rewards and amplifies outrage–meaning that people are incentivized to create completely false outrage against innocent people. Punishing one group of people more harshly because the algorithm encourages bad behavior in other people is cruel and does not solve the problem. Changing the algorithm would solve the problem, but the algorithm is what makes YouTube money.
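That point can be made concrete with a toy model. Everything here is invented by me (the video list, the weights, the click formula)–YouTube's real ranking is proprietary–but it shows the mechanism: when the objective is predicted clicks and outrage drives clicks, outrage wins the feed.

```python
# Toy model: each video has a quality score and an outrage score,
# and predicted clicks depend on both, because in the revenue column
# hate-clicks pay out exactly like honest clicks.
videos = [
    {"title": "calm explainer",    "quality": 0.9, "outrage": 0.1},
    {"title": "mild vlog",         "quality": 0.5, "outrage": 0.3},
    {"title": "fish-waving stunt", "quality": 0.2, "outrage": 0.9},
]

def predicted_clicks(v):
    """Click model (assumed weights): outrage pulls harder than quality."""
    return 0.4 * v["quality"] + 0.6 * v["outrage"]

# Rank by predicted clicks: the most outrageous video floats to the top.
by_clicks = sorted(videos, key=predicted_clicks, reverse=True)
print([v["title"] for v in by_clicks])

# "Changing the algorithm" in miniature: rank by quality instead.
by_quality = sorted(videos, key=lambda v: v["quality"], reverse=True)
print([v["title"] for v in by_quality])
```

Swapping the objective fixes the feed in one line; the catch, as noted above, is that the first objective is the one that makes money.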

In reality, the outrage machine is pulling the country apart–and I don’t know about you, but I live here. My stuff is here; my loved ones are here.

The outrage machine must stop.

*I remember once riding in an airplane with my father. As the flight crew explained that in the case of a sudden loss of cabin pressure, you should secure your own mask before assisting your neighbors, his response was a very vocal “Hell no, I’m saving my kid first.” Maybe not the best idea, but the sentiment is sound.

**When the boat Elian Gonzalez and his family were riding in capsized, his mother and her boyfriend put him in an inner tube, saving his life even though they drowned.

Noah’s Twitter Deluge

To be alive today is to drown in data…

Noah’s Ark by Edward Hicks, 1846

Now the earth was corrupt in GNON’s sight and was full of violence. So GNON said to Noah, “I am going to put an end to all people, for the earth is filled with violence because of them. So make yourself an ark of cypress wood; make rooms in it and coat it with pitch inside and out. Make a roof for it, leaving below the roof an opening one cubit high all around. Put a door in the side of the ark and make lower, middle and upper decks, and make it immune to Twitter, Facebook, and cable TV. I am going to bring a deluge of information, unending news, tweets, and endless status updates on the earth to distract all life under the heavens, every creature that has the breath of life in it, until they fade from existence.”

…modernity is selecting for those who resist modernity.