Is The Tech Industry Fighting White Supremacy Or Fueling It?

The tech industry has grown complacent in the fight against white supremacy and neo-Nazism as hate groups continue to thrive on social media.

Two outstretched white arms giving the fascist salute during a gathering in West Allis, Wisconsin

There is a cesspool where hate and white supremacy live and thrive, and it’s called social media.

Online spaces are being used as recruitment tools for hate groups and as platforms for viciously attacking others from behind the safety of a digital screen.

Following the tragic and disturbing events that took place in Charlottesville, Virginia, in August, tech leaders spoke out against hate speech on their platforms.

Facebook founder Mark Zuckerberg proclaimed that there is “no place for hate in our community,” while Snapchat announced that hate speech “will never be tolerated,” and YouTube declared that new tools will soon be introduced to help combat the spread of hate online, TechCrunch reports.

Additionally, several tech companies cut ties with President Donald Trump’s business councils in protest against his assertion that there were “very fine people” among the group of violent neo-Nazis that descended upon Charlottesville.

Even Airbnb received praise for shutting down the accounts of users who were looking for places to stay while attending hate rallies.

However, despite all of this, neither the companies’ policies nor their algorithms have served as effective deterrents for trolls who use these sites for hateful purposes.

Presumably in an effort to maintain their user base, tech leaders have failed to take aggressive action against the divisive and offensive content on their sites and have fallen into complacency.

There are virtually no consequences for the hate-mongers who violate a given site’s rules. Twitter, for example, is known for suspending accounts that violate its terms of service, but the response often comes only after the material has already gone viral and been captured in screen grabs by other users.

Offensive posts should be flagged instantly, whether through keyword detection or a similar strategy, so they are caught before they’re shared with the masses.
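As a rough illustration of that idea, the sketch below shows what a simple keyword-based, pre-publication check could look like. The keyword list, function names, and routing logic are hypothetical placeholders, not any platform's actual moderation system, which would also rely on context analysis and human review.

```python
# Minimal sketch of a keyword-based pre-publication check.
# The flagged terms and function names are illustrative assumptions only.
import re

# Hypothetical list of terms associated with hate content; a real system would
# combine curated lists, context analysis, and human review.
FLAGGED_TERMS = {"white power", "14 words", "blood and soil"}

def should_flag(post_text: str) -> bool:
    """Return True if the post contains any flagged term and should be held
    for review before it can be shared."""
    normalized = re.sub(r"\s+", " ", post_text.lower())
    return any(term in normalized for term in FLAGGED_TERMS)

def submit_post(post_text: str) -> str:
    """Route a post either to immediate publication or to a review queue."""
    if should_flag(post_text):
        return "held_for_review"  # caught before it reaches the masses
    return "published"

if __name__ == "__main__":
    print(submit_post("Meet up this weekend for a blood and soil rally"))  # held_for_review
    print(submit_post("Join our march against hate this Saturday"))        # published
```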

Social media sites are constantly introducing new features on their platforms, proving how far technology has come, yet they can’t seem to find a viable solution to curb the spread of white supremacy and hate?

Furthermore, these sites seem to be unable to distinguish positive posts from offensive material. Recently, Facebook blocked an ad for a march against white supremacy, which, needless to say, sends a message that contradicts Zuckerberg's aforementioned statements.

The group "Portland Stands United Against Hate" bought an ad to promote their counter-demonstration against a local white supremacist rally. However, two hours after the ad was approved and went live, Facebook kicked it back for supposedly violating the site’s hate speech policies.

“We don't allow ads that use profanity, or refer to the viewer's attributes (ex: race, ethnicity, age, sexual orientation, name) or harass viewers,” Facebook’s denial letter read, according to Slate.

However, according to Jamie Partridge — one of the lead organizers of Portland Stands United Against Hate — there were no references to any racial category in the ad that could be considered offensive except for the words “white nationalism.”

Apparently, Facebook bans hate speech against certain “protected” categories, such as sex, race, religion, and gender identity, among others, but the site allows potentially offensive references to subsets of those various categories.

“One document trains content reviewers on how to apply [Facebook’s] global hate speech algorithm. The slide identifies three groups: female drivers, black children and white men. It asks: Which group is protected from hate speech? The correct answer: white men,” according to a ProPublica report detailing leaked documents about Facebook’s moderation policies.

In essence, you can post hateful statements about very specific subsets of people as long as you’re not making blanket, generalized statements against larger groups of people.
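To make the reported rule concrete, here is a minimal sketch of the logic ProPublica describes: a group is shielded only when every term describing it belongs to a protected category. The category sets and function name below are assumptions for illustration, not Facebook's actual code.

```python
# Minimal sketch of the protected-category rule described in the ProPublica
# report. The sets and function are illustrative assumptions, not Facebook's
# actual implementation.

# Protected categories per the reported policy: race, sex, religion,
# gender identity, and so on.
PROTECTED_TERMS = {
    "white", "black",                   # race
    "men", "women", "male", "female",   # sex / gender identity
    "muslims", "christians", "jews",    # religion
}

def is_protected_group(*descriptors: str) -> bool:
    """A group is shielded from hate speech only if every descriptor falls
    into a protected category; one non-protected modifier (age, occupation,
    etc.) removes the protection."""
    return all(term.lower() in PROTECTED_TERMS for term in descriptors)

if __name__ == "__main__":
    print(is_protected_group("white", "men"))       # True  -> attacks blocked
    print(is_protected_group("black", "children"))  # False -> attacks slip through
    print(is_protected_group("female", "drivers"))  # False -> attacks slip through
```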

If this sounds like a flawed system, that’s because it is. 

Alas, loopholes of this nature are exactly why white supremacy and hate have a home on social media. As much as we'd like to believe that the tech industry is committed to fixing these issues, we can't help but wonder why it's taking so long.

It would be no surprise if executives of these companies are dragging their feet on improving their algorithms because they're too afraid of facing backlash or being accused of silencing non-liberal views. 

Banner/Thumbnail Photo Credit: Reuters, Robert Galbraith
