Facebook and Google need humans, not just algorithms, to filter out hate speech

Facebook and Google give advertisers the ability to target users by their specific interests. That’s what has made those companies the giants that they are. Advertisers on Facebook can target people who work for a certain company or had a particular major in college, for example, and advertisers on Google can target anyone who searches a given phrase.

But what happens when users list their field of study as “Jew hater,” list their employer as the “Nazi Party,” or search for “black people ruin neighborhoods”?

All of those were options Facebook and Google suggested to advertisers as interests they could target in their ad campaigns, according to recent reports by ProPublica and BuzzFeed. Both companies have now removed the offensive phrases that the news outlets uncovered, and said they’ll work to ensure their ad platforms no longer offer such suggestions.

That, however, is a tall technical order. How will either company develop a system that can filter out offensive phrases? It would be impossible for humans to manually sift through and flag all of the hateful content people enter into the websites every day, and no algorithm can yet detect offensive language with perfect accuracy. The fields of machine learning and natural language processing have made leaps and bounds in recent years, but it remains incredibly difficult for a computer to recognize whether a given phrase contains hate speech.

“It’s a pretty big technical challenge to actually have machine learning and natural language processing be able to do that kind of filtering automatically,” said William Hamilton, a PhD candidate at Stanford University, who specializes in using machine learning to analyze social systems. “The difficulty in trying to know, ‘is this hate speech?’ is that we actually need to imbue our algorithms with a lot of knowledge about history, knowledge about social context, knowledge about culture.”

A programmer can tell a computer that certain words or word combinations are offensive, but there are far too many word combinations that amount to an offensive phrase to enumerate them all in advance. Machine learning allows programmers to feed hundreds or thousands of offensive phrases into computers to give them a sense of what to look for, but the computers are still missing the requisite context to know for sure whether a given phrase is hateful.

“You don’t want to have people targeting ads to something like ‘Jew hater,’” Hamilton said. “But at the same time, if somebody had something in their profile like, ‘Proud Jew, haters gonna hate,’ that may be OK. Probably not hate speech, certainly. But that has the word ‘hate,’ and ‘haters,’ and the word ‘Jew.’ And, really, in order to understand one of those is hate speech and one of those isn’t, we need to be able to deal with understanding the compositionality of those sentences.”

And the technology, Hamilton said, is simply “not quite there yet.”
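Hamilton’s example can be made concrete with a toy sketch (the word list and phrases here are illustrative, not either company’s actual filter): a naive keyword matcher flags both the hateful profile entry and the innocuous one, because it matches words without understanding how they compose.

```python
# Toy illustration of why word matching fails on hate speech:
# it sees words, not meaning. Word list and phrases are made up.

FLAGGED_WORDS = {"jew", "hate", "hater", "haters"}

def naive_flag(phrase):
    """Flag a phrase if it contains any watch-listed word."""
    words = {w.strip(",.!?").lower() for w in phrase.split()}
    return bool(words & FLAGGED_WORDS)

print(naive_flag("Jew hater"))                     # True -- genuinely hateful
print(naive_flag("Proud Jew, haters gonna hate"))  # True -- a false positive
```

Both phrases trip the filter, even though only the first is hate speech; distinguishing them requires exactly the knowledge of context and compositionality Hamilton describes.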

The solution will likely require a combination of machines and humans: machines flag phrases that appear to be offensive, and humans decide whether those phrases amount to hate speech and whether the interests they represent are appropriate targets for advertisers. Humans can then feed that information back to the machines, making them better at identifying offensive language.
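The workflow described above can be sketched in a few lines (a hypothetical structure, not either company’s actual pipeline): the machine surfaces suspicious phrases, a human renders the verdict, and each verdict becomes new training data for the next version of the model.

```python
# Sketch of a human-in-the-loop review pipeline (hypothetical).
# score_fn stands in for a trained classifier's confidence score;
# human_verdict stands in for a human reviewer's decision.

def machine_flag(phrase, score_fn, threshold=0.5):
    """Route a phrase to human review if the model finds it suspicious."""
    return score_fn(phrase) >= threshold

def review_loop(phrases, score_fn, human_verdict, training_data):
    """Humans judge flagged phrases; their labels feed back to the model."""
    blocked = []
    for phrase in phrases:
        if machine_flag(phrase, score_fn):
            is_hate = human_verdict(phrase)           # human makes the call
            training_data.append((phrase, is_hate))   # grows the training set
            if is_hate:
                blocked.append(phrase)
    return blocked
```

For example, a phrase the model scores as suspicious but a human clears still gets recorded as a labeled negative example, which is how the feedback loop gradually reduces false positives.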

Google already uses that kind of approach to monitor the content its customers’ ads run next to. According to a recent article in Wired, it employs temp workers to evaluate websites that display ads served by its network and to rate the nature of their content. Most of those workers were asked to focus primarily on YouTube videos starting last March, when advertisers including Verizon and Walmart pulled their ads from the platform after learning some had been shown in videos that promoted racism and terrorism.


Source: Quartz