LGBTQ Content Filtering: Week 1 Update

Over the course of my project I hope to look at several social media sites, but I decided to start by looking at LGBTQ content on Facebook. So far, I’ve identified two main areas of concern: ads being taken down for having queer themes, and posts being taken down for containing queer reclamations of hate speech.

For ads, I have been looking at the Facebook Ad Library, which maintains a searchable collection of every ad posted to Facebook, whether currently active or not. I’ve found that many LGBTQ-related ads get marked as “related to issues of politics or national importance” solely because they are aimed at the LGBTQ population. Ads in this category are required to carry a disclaimer stating who paid for them, part of Facebook’s push for transparency in election advertising. I’ve found several ads selling products aimed at the LGBTQ community (often just rainbow-themed merchandise) that were taken down for not conforming to this rule. The following link is an example: file:///Users/chasejones/Zotero/storage/IU9IMJD5/library.html
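To make this easier to explore programmatically, here is a rough sketch of how one might pull “issue” ads out of the Ad Library through Facebook’s public Graph API. The endpoint, field names, and API version follow the Ad Library API documentation as I understand it and may have changed since; the access token and search terms are placeholders, not anything from a real query.

```python
# Sketch: query the Facebook Ad Library API for ads classified as
# political/issue ads. Endpoint, fields, and version are assumptions based on
# Facebook's public Graph API docs; ACCESS_TOKEN is a placeholder.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder, obtained via a developer account
AD_ARCHIVE_URL = "https://graph.facebook.com/v19.0/ads_archive"

params = {
    "search_terms": "LGBTQ pride",          # hypothetical example query
    "ad_type": "POLITICAL_AND_ISSUE_ADS",   # the "issue ads" category discussed above
    "ad_reached_countries": "US",
    "fields": "page_name,ad_creative_bodies,funding_entity,ad_delivery_start_time",
    "access_token": ACCESS_TOKEN,
}

response = requests.get(AD_ARCHIVE_URL, params=params)
response.raise_for_status()

for ad in response.json().get("data", []):
    # funding_entity is the "paid for by" disclaimer required for issue ads
    print(ad.get("page_name"), "-", ad.get("funding_entity"))
```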

The second main issue I’ve looked at is how queer users’ posts often get taken down for using words like “fag”/“faggot” or “dyke,” words that are both used as slurs against queer people and reclaimed by queer people. For example, one user’s post was taken down for mentioning a Supreme Court ruling about “Dykes on Bikes,” a lesbian motorcycle group. Here is the post for reference: https://www.facebook.com/photo.php?fbid=10207538918029446&set=a.1058509241349.9142.1784433308&type=1&theater Facebook uses both artificial intelligence (AI) and human moderators to review posts, although text-based posts are usually flagged by the algorithms first. Despite many recent advances in Natural Language Processing (NLP), the field combining computer science and linguistics, differentiating between a word used as hate speech and the same word reclaimed by an oppressed group is still very difficult.
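To illustrate why this is hard, here is a toy example (not Facebook’s actual system) of a keyword-based filter. Because it ignores context entirely, it flags a reclaimed use of “dyke” just as readily as a hateful one.

```python
# Toy illustration of why blocklist-style moderation struggles with reclaimed
# slurs: both posts below trip the same filter, though only one is hate speech.
BLOCKLIST = {"dyke", "faggot"}  # example terms; real systems are far larger

def naive_flag(post: str) -> bool:
    """Flag a post if any blocklisted term appears as a word, ignoring context."""
    words = [w.strip(".,!?\"'").lower() for w in post.split()]
    return any(w.rstrip("s") in BLOCKLIST for w in words)

hateful = "I can't believe they let a dyke in here"
reclaimed = "So proud to ride with Dykes on Bikes at Pride this year!"

print(naive_flag(hateful))    # True  -- correctly flagged
print(naive_flag(reclaimed))  # True  -- wrongly flagged: context is ignored
```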

Next, I plan on diving deeper into the technical aspects of this process, although Facebook is still somewhat reserved about the specifics of how it moderates content. I will look more in depth at the various open-source programs Facebook lists as using on its platform, as well as read some academic papers on current research in hate speech detection in NLP.
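For a sense of what that research looks like in practice, here is a minimal sketch of the kind of supervised baseline many hate speech detection papers start from: a bag-of-words model with a logistic regression classifier. The handful of labeled posts below are invented purely for illustration; real work uses corpora of thousands of annotated posts.

```python
# Sketch of a common hate-speech-detection baseline: TF-IDF features plus
# logistic regression. The tiny labeled dataset is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "so proud to march with dykes on bikes this weekend",  # reclaimed / in-group
    "queer and here, happy pride everyone",                # benign
    "get those dykes away from my kids",                   # hateful
    "nobody wants you here, faggot",                       # hateful
]
labels = [0, 0, 1, 1]  # 0 = not hate speech, 1 = hate speech

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# With so little data the model can lean heavily on the slur token itself,
# so reclaimed phrasings risk being scored as hateful -- the failure mode
# discussed above.
print(model.predict(["riding with dykes on bikes at pride"]))
```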

Comments

  1. klsheridan says:

    Hi Chase! This sounds like a really interesting project, and I’m looking forward to reading more about your work. I’m not super familiar with how Facebook’s ad moderation works – do you know if the ads that were taken down for being “political” were flagged by average users and then filtered by moderators, or do they get automatically taken down if they are flagged by random people?

  2. iawilliams says:

    This is really interesting and important work – I had no idea that LGBTQ-related advertisements were always classified as ‘political’ by Facebook. Are you planning to propose some ways that Facebook and other social media sites can improve their algorithms? Do they use any algorithms that are more complex than simply flagging violating words or phrases and taking them down? I thought it was interesting that you pointed out that oppressed groups can reclaim hate speech. I would love to see more exploration of how and why that happens, and how the linguistic context of reclaimed hate speech differs from the context of true hate speech, if you have time to do so in your project.
    I know that many social media sites also have human moderators – do you propose a sensitivity or cultural awareness training for these human moderators on LGBTQ issues? Looking forward to seeing what you find!

  3. cdjones03 says:

    Hi there, thank you for your comment. So Facebook requires people who plan to post ads related to “social issues, elections or politics” (the category was just renamed this month from “issues of politics or national importance”) to go through an authorization process to run those kinds of ads. Facebook says it uses both content reviewers and AI to review posted ads (the specifics are not defined). An ad may be rejected or taken down after being posted if (1) the advertiser was not authorized for this kind of ad, (2) the ad goes against Community Standards, or (3) it does not carry the proper disclaimer saying who paid for it. If someone reports an ad, it stays up until a human moderator personally reviews it and decides whether to ignore the report or take the ad down and contact the advertiser. Here is a link to Facebook’s policy and how they define “social issues, elections or politics” if you’re interested: https://www.facebook.com/business/help/313752069181919?helpref=page_content
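    If it helps, here is a rough sketch in code of that flow as I understand it from Facebook’s public policy pages; the function and field names are my own invention for illustration, not anything from Facebook’s actual systems.

```python
# Rough sketch of the ad review flow described above; names are hypothetical,
# not Facebook's real code or API.
def review_ad(ad):
    if ad["is_issue_ad"] and not ad["advertiser_authorized"]:
        return "rejected: advertiser not authorized for social issue/political ads"
    if ad["violates_community_standards"]:
        return "rejected: violates Community Standards"
    if ad["is_issue_ad"] and not ad["has_paid_for_by_disclaimer"]:
        return "rejected: missing 'paid for by' disclaimer"
    return "approved"

def handle_user_report(ad, human_moderator_decision):
    # A user report alone does not take the ad down; it stays up until a human
    # moderator reviews it and decides to ignore the report or remove the ad.
    if human_moderator_decision == "remove":
        return "ad removed, advertiser contacted"
    return "report ignored, ad stays up"
```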

  4. cdjones03 says:

    Hi there Irene, thanks for commenting. The goal of my project is definitely to have some recommendations for improvements to Facebook’s algorithms, but Facebook is very secretive about the specifics and the actual code behind them. Additionally, a lot of the problems come from Facebook’s core ideology itself. So, I think my project will have broader recommendations for change and calls for increased research in certain areas that would aid Facebook’s AI. Looking at advertisements specifically, Facebook says it reviews an ad’s “images, text, targeting, and positioning” to see if it goes against its advertising policies. In my final research paper I plan to give a brief history of queer reclamation of hate speech specifically, along with current research on how algorithms understand and deal with it. With regard to human moderators, I have not seen Facebook make any statement about training on LGBTQ issues specifically, so I definitely agree that such training could help reduce the number of posts wrongfully taken down.