AI not working as intended

[Chart: younger Republican men are less likely to see a need for social media companies to remove offensive content. (source)]

As we have previously covered, there has been a push in recent times for social media companies to censor posts containing offensive remarks, Fake News, Russian bots, or slowed-down videos of Nancy Pelosi. Social media companies have been dumb enough to go along with this demand (mostly because the people who run them are also the kinds of people who think social media companies should censor more), but as a practical matter, they are of course incapable of reading and evaluating every post that comes their way.

So these companies need algorithms to trawl their millions of posts for offensive content.

Unfortunately (for someone), any AI trained on things that people actually find offensive will censor things the Woke don’t want censored:

On LGBTQ pride month, we share some of the main findings of our research looking at the impacts of Artificial Intelligence on LGBTQ speech. Our goal is to shed some light on the gaps and biases that may be present in AI technologies that are currently being developed to moderate content on internet platforms and demonstrate how they might have significant implications for LGBTQ rights.

The results of this study are unintentionally hilarious:

By training their algorithm to learn what pieces of content are more likely to be considered as “toxic”, Perspective may be a useful tool to make automated decisions about what should stay and what should be taken down from the internet platforms. …

We used Perspective’s API to measure the perceived levels of toxicity of prominent drag queens in the United States and compared them with the perceived levels of toxicity of other prominent Twitter users in the US, especially far-right figures. …

After getting access to Twitter’s API, we collected tweets of all former participants of RuPaul’s Drag Race (seasons 1 to 10) who have verified accounts on Twitter and who post in English, amounting to 80 drag queen Twitter profiles.

We used Perspective’s production version 6 dealing with “toxicity”. We only used content posted in English, so tweets in other languages were excluded. We also collected tweets of prominent non-LGBTQ people (Michelle Obama, Donald Trump, David Duke, Richard Spencer, Stefan Molyneux and Faith Goldy). These Twitter accounts were chosen as control examples for less controversial or “healthy” speech (Michelle Obama) and for extremely controversial or “very toxic” speech (Donald Trump, David Duke, Richard Spencer, Stefan Molyneux and Faith Goldy). In total, we collected 116,988 tweets and analysed 114,204 (after exclusions).
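For the curious, “using Perspective’s API” is about as simple as API calls get: you POST a snippet of text and get back a TOXICITY score between 0 and 1. A minimal sketch in Python (the API key is a placeholder, and the 0.8 takedown threshold is my own assumption for illustration, not a setting from the study):

```python
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder, not a real key
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity(text: str) -> float:
    """Return Perspective's TOXICITY summary score (0.0-1.0) for one tweet."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],  # the study only scored English-language tweets
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A platform using Perspective for automated takedowns would apply a cutoff:
if toxicity("I love you bitch") > 0.8:  # threshold chosen for illustration
    print("flagged for removal")
```

The score is a probability-style estimate of how “toxic” an average rater would find the text, which is exactly why the results below come out the way they do.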

[Chart of toxicity scores: “drag queens are in blue; white supremacists in red; Michelle Obama in green and Donald Trump, in orange.”]

It turns out that drag queens are really rude and Richard Spencer is pretty polite. Even Donald Trump, who is kind of rude sometimes, is more polite than the majority of drag queens.

Of course, anyone who has actually read their tweets would have known this already. As the study notes, drag queens are likely to pepper their tweets with phrases like “I love you bitch” and “I’m a faggot,” while white supremacists are likely to say things like “Italy has a lovely history and culture and we should preserve it.” Drag queens also know their speech is protected (back in the early days of services like AOL, you weren’t allowed to use words like “bitch” and “fuck,” but today they are allowed), while WNs know that they are being watched and that if they step out of line, there’s a good chance their accounts will be deleted.

Any algorithm trained on actual rudeness as perceived by normal humans will of course tag drag queens’ typical tweets as offensive; it will take intervention by actual humans to train the algorithms to pick up on the words their creators actually want censored, like “Donald Trump” or “Chinese detention camps”.
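Mechanically, there is no mystery in how this happens: text classifiers of this kind learn word-level associations from human-labeled examples, so vocabulary that co-occurs with “offensive” labels gets a high weight regardless of speaker or intent. A toy sketch of the principle (the training examples are invented for illustration; Perspective’s real model is far larger, but the failure mode is the same):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy training data: texts a human rater labeled offensive (1) or not (0).
texts = [
    "I love you bitch",                   # affectionate in drag slang, but...
    "shut up you ugly bitch",             # ...the model only sees the word itself
    "Italy has a lovely history and culture and we should preserve it",
    "have a wonderful day everyone",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The classifier keys on surface vocabulary, not register, irony, or in-group
# reclamation, so the friendly-but-profane tweet scores as more "toxic" than
# the superficially polite extremist slogan.
print(model.predict_proba(["I love you bitch, werk!"])[0][1])
print(model.predict_proba(["We must secure a future for our people"])[0][1])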

 Speaking of which:

Van de Weghe has continued to study Chinese AI—how it tracks people with ever-improving facial recognition software. He describes the new “social credit” programs that use AI to combine data from numerous sources, assign scores to people’s behavior and allocate privileges accordingly. In 2013, when Liu Hu, a Chinese journalist, exposed a government official’s corruption, he lost his social credit and could no longer buy plane tickets or property, take out loans, or travel on certain train lines. …

Jennifer Pan, an assistant professor of communication, explains why Chinese citizens accept social credit programs. “People think others spit in the street or don’t take care of shared, public facilities. They imagine that social credit could lead to a better, more modern China. This is an appealing idea. Political dissent is already so highly suppressed and marginalized that the addition of AI is unlikely to have anything more than an incremental effect.”

The result for journalists is that actual prisons (where many are currently held) are replaced by virtual prisons—less visible and therefore more difficult to report on. In the face of this, Van de Weghe says, many journalists he knows have quit or self-censored. And while reporters outside China can critique the general practice of censorship, thousands of individual cases go unnoticed. Government computers scan the internet for all types of dissidence, from unauthorized journalism to pro-democracy writing to photos of Winnie-the-Pooh posted by citizens to critique President Xi Jinping, who is thought to bear a resemblance. AI news anchors—simulations that resemble humans on-screen—deliver news 24/7. The government calls this media control “harmonization.” The Communist Party’s goal for sustaining its rule, according to Pan, “is to indoctrinate people to agree. Authoritarian regimes don’t want fear.”

There is a lot of amazing technological progress coming out of China these days, for good or bad:

If you think mass AI-censorship and surveillance sounds scary in China but totally good and awesome in the US, you haven’t thought this through.


5 thoughts on “AI not working as intended”

  1. Well, like it or not, it is the future. And the future is now.
    We must adapt to the new realities of AI, robots, mass data, and whatever new changes these technologies bring.


  2. We used to learn that the West operated on guilt but the East operated on shame, and what is the social credit scheme but nationalized reputation? Not surprising westerners should find applied popularity revolting when our traditional moral systems insist that right is right regardless of popular opinion. It is, however, telling that the techno-overlords can’t see why their subjects would be upset about it.

