Facebook recently clarified its political ad policy, and that policy’s connection to rising Islamophobic violence deserves close attention. The company maintains an anti-censorship stance on political ads while claiming to ban hate speech, but the distinction between the two remains unclear and full of loopholes.

CEO Mark Zuckerberg presents the ad policy as protecting free expression and letting users decide for themselves. This denies Facebook’s centrality as a U.S. and global news source. It also ignores how remote communications technologies uniquely foment hatred and need to be approached with that awareness.

Fake news and hate speech are the concerns here, with any number of special interest groups running ads that circulate in news feeds and are frequently consumed as fact. Digital ad spending in the upcoming presidential election could reach $3 billion, more than the politicians themselves will spend on ads.

Twitter’s decision to ban political ads, and Google’s decision to distinguish between political and issue ads while limiting microtargeting, show that social media platforms have options. It’s hard to imagine Facebook’s controversial choice isn’t motivated by profit: there is enormous money to be made selling political ads, especially in a highly anticipated presidential election like the upcoming U.S. one.

Data-mining lies at the heart of Facebook’s political ad infrastructure: microtargeting algorithms use personal data to aim ads at very narrowly defined groups. If Facebook limited this microtargeting, demand for user data would fall, reducing the temptation to steal it, as seen in the Cambridge Analytica case and a recent Facebook developer employee breach.

To be fair, it’s too early to know how social media affects users’ electoral and political decisions overall. The past few years have raised public awareness of fake ads and news. Average Facebook users are now far more likely to scrutinize the ads flowing through their highly specialized news feeds, and more users know they have a microtargeted profile.

Censorship patronizes users’ sense of their own decision-making capacity. One hallmark of American individualism is people’s highly private relationship to voting. Given this staunch notion of voter independence, many U.S. residents would find it insulting to suggest that fake ads can sway an election. And it is still unclear what role Facebook ads, or any representations of issues, actually play in elections.

What’s clearer is how politically targeted hate speech on social media, in particular, has fueled real-world violence. Here, Facebook’s links to Myanmar’s Rohingya massacres cannot be ignored. The Guardian reports that the international far right uses Facebook to organize real-world political power, and the company profits from this speech.

While both the far left and far right can engage in hate speech, the Trump-era climate favors the right, and this hatred is monetized by Facebook’s anti-censorship ad policy. This is the case even as Zuckerberg feigns concern about the impacts of hate speech and professes a willingness to censor those expressions.

How does the company define hate speech?

Overt hate speech, advanced almost solely by far-right racist and Islamophobic groups, plays a unique role here. While social media is lauded for expanding communication, its remoteness creates a novel dilemma.

More people are brought together in communication, but the remote style of internet exchange can encourage hateful expression. Coupled with other factors, this can lead to social violence in places like Myanmar and China — and the U.S.

Zuckerberg’s professed commitment to free speech denies how remote communication facilitates hate-group organizing and experiments in crowd governance and control. The problem is not the monetization of expression itself; it’s the monetization of hate speech through permitted forms of online expression.

Zuckerberg’s position ignores the Trump-era climate of permissive hatred and leads us down a path far from free speech, one where real-world violence curtails targeted groups’ safety and autonomy — a serious problem if you belong to a group targeted online.

Even U.S. voters who value individual political autonomy and reject patronizing corporate censorship should be able to see that remotely communicated hate speech can cause violence. Zuckerberg acknowledges as much, claiming the company removed 7 million instances of hate speech in 2019’s third quarter alone. But an algorithm covering forty languages cannot adequately monitor a social media world that speaks more than 7,000.

That’s one problem with the company’s fledgling hate speech position. Another is how the line between political ad and hate speech will be drawn this election season. Twitter answered by eliminating political ads, and with them, central moral responsibility for unsavory electoral outcomes or violence.

Many Facebook users, including employees, want political ads controlled in such a heated election season — especially because microtargeting, by its nature, encroaches on data privacy.