Tech Giants, Free Speech and Hate: Where Do We Go from Here?

Mourners and passersby left flowers and messages of support and love at the intersection of Wilkins and Murray Avenues, about a block away from where the attack on Tree of Life Synagogue took place Saturday, October 27. Photo: Justin Hayhurst/100 Days in Appalachia

In the past few weeks, America has experienced the deadliest anti-Semitic terrorist attack in its history, one of the largest attempted political assassination campaigns on record, in which pipe bombs were mailed to prominent members of the Democratic Party, and the murder of two African American grandparents by an avowed white supremacist at a Kroger grocery store. Two of these events, the shootings at a synagogue in Pittsburgh and a grocery store in Louisville, happened in the heart and on the outskirts of Appalachia.

This comes after the country experienced its deadliest high school shooting in Parkland, Florida, its deadliest mass shooting at a country music concert in Las Vegas and the hate-fueled murder of nine African Americans during a Bible study at Emanuel African Methodist Episcopal Church in Charleston, South Carolina. All of these events happened in just the past four years.

Many recent American terrorist attacks have shared something in common: killers who were radicalized, at least in part, online. For years, extremist groups around the world have used social media networks both to connect with people who share their ideology and to recruit people who may be sympathetic to their beliefs. And now, Americans are seeing the results of homegrown terrorists who use the internet both to become radicalized and to radicalize others.

On Saturday, October 27, Robert Bowers posted on the social media network Gab that Jewish refugee resettlement nonprofit HIAS “likes to bring invaders in that kill our people.” He continued, “I can’t sit by and watch my people get slaughtered. Screw your optics, I’m going in.” Two hours later, 11 people were shot and killed at the Tree of Life Synagogue in Pittsburgh’s Squirrel Hill neighborhood. Prosecutors have charged Bowers with their murders. At the top of his Gab profile were the words, “jews are the children of satan.”

Gab was founded by Andrew Torba, an avid Trump supporter who says he launched the site because of perceived liberal bias on larger social media sites. Since its inception, Gab has been a favorite of alt-right extremists. Although nothing in the site’s policies references the alt-right or white nationalism, its 2017 annual report brags about having “over 50 million conservative, libertarian, nationalist, and populist internet users from around the world” and notes that “[t]hese users are also actively seeking alternative media platforms like Breitbart.com, DrudgeReport.com, [and] Infowars.com,” three other websites known for promotion of white nationalism.

The day before the Pittsburgh massacre, Cesar Sayoc had been arrested in Florida after allegedly attempting to mail bombs to at least 12 high-profile reporters, liberal activists and Democratic politicians, including former President Barack Obama, George Soros, Hillary Clinton, New York Times reporter Sarah Jeong, and Parkland survivor David Hogg. He had posted numerous threats on Twitter, including, “Your Time is coming,” “Your days are over,” “your next,” and “Hug your loved ones real close everytime U leave your home.” He repeatedly hurled threats at one target at a time before moving on to the next.

As more details come to light on these men’s internet lives, more people are asking the question — Should tech companies be doing more to shut down hate speech on their platforms?

After Saturday’s massacre at Tree of Life and the revelation of Bowers’ posting history, other tech companies quickly severed ties with Gab. Over the weekend, Gab was removed from app stores, payment processors and hosting providers. By Sunday night, Gab was forced offline. This “de-platforming,” said Gab, was a violation of its right to free speech.

To a growing group of people, mostly on the right, silencing hate speech has become akin to censorship and is perceived as a violation of their First Amendment rights. A growing number of right-wing politicians and pundits have jumped into the debate, with people like Ted Cruz joining the likes of conspiracy theorist Alex Jones in equating the enforcement of community guidelines in digital spaces with an act of “tyrannical censorship.”

The alt-right uses these opportunities to stoke fears of censorship. “If it happens to us,” they ask, “could you be next?” Phrases like “the First Amendment” and “my right to free speech” are often thrown around.

Unfortunately for those who believe any of this is connected to the First Amendment, internet companies silencing hate speech has nothing to do with the constitutionally protected right to free speech. Actions by private companies, by definition, do not violate the First Amendment.

The First Amendment is a limit on the government.

The First Amendment states:

Congress shall make no law . . . abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

Over the past two centuries, and because of later additions to the Constitution (chiefly the Fourteenth Amendment), the Supreme Court has found that the First Amendment also applies to state and local governments and the other branches of the federal government, in addition to Congress. But the key is that it only limits the government.

All “free speech” has limits.

Even the most ardent defenders of free speech rights admit that the First Amendment has limits. The Supreme Court has always recognized that certain categories of speech fall outside its protection, including child pornography, “true threats,” and speech that is both intended and imminently likely to incite violence. This is because these types of speech can have serious, even deadly, consequences while adding little or no value to public discourse.

For all of its railing against censorship, even Gab has censored users. In August, after Microsoft threatened to drop Gab from its hosting services, Gab removed anti-Semitic posts from a high-profile neo-Nazi who said that Jews should be raised as “livestock” and that he wanted to destroy a “holohoax memorial.” Child pornography has never been allowed. And Utsav Sanduja, the now-former Gab COO, reported threats he received on Gab to law enforcement. These, too, are limits on speech.

Private companies have First Amendment rights, too.

After its site went down on Sunday, Gab implored President Trump to take action on its behalf, via tweet: https://twitter.com/getongab/status/1056384624899715073

Ironically, this tweet asks the President to violate the First Amendment rights of companies that no longer want to do business with Gab.

With the exception of protected classes like race, sex, national origin, religion, and disability status enumerated in civil rights laws, businesses and people have a constitutional right to choose with whom they do business. Anyone may legally refuse to do business with others who, for example, traffic in hate speech or violent rhetoric. People, businesses, and organizations all have a right to free speech and free expression. The President stepping in to force companies to do business with one another would violate their First Amendment rights.

Gab is gone (for now), but the underlying problems still remain.

Social media networks don’t exist in a vacuum. Before PayPal and Stripe terminated their relationships with Gab, they processed payments for it. GoDaddy previously hosted both Gab and Dylann Roof’s white supremacist manifesto. And without app stores and hosting providers, websites that traffic in hate speech could cease to exist.

It’s easier to be hateful on the internet than it is in person. But hateful and violent rhetoric don’t end when we close our web browsers. As hate speech has increased online, it has also increased in our daily lives. And as hate speech increases in our daily lives, so do hate crimes. Not only are hate crimes in general on the rise; far-right extremists have carried out nearly three times as many terror attacks in the United States as Islamist extremists.

The digital connection to hate is not a new phenomenon. For years, tech companies have been fielding complaints from users about hate speech and threats, with little to no action — or worse, the wrong action. Facebook’s Community Standards and Twitter’s Rules both purport to prohibit threats of violence and hate speech, but a quick search on either platform for a racial slur or a demeaning term for women will show that many such posts remain. But it doesn’t stop there. Facebook has also determined that it is “hate speech” to say “men are trash” and “men are scum,” and routinely bans women for such comments, which are picked up automatically by an algorithm. And yet the racial and ethnic slurs remain.

What tech companies say.

Tech companies give a number of answers when asked about content moderation after attacks like the ones we have experienced recently. Facebook, while continuing to expand as fast as it can, says there are just too many users and posts to catch all hate speech and threats. Twitter CEO Jack Dorsey, known to Twitter users as @jack, is frequently tagged by users who are asking him to ban Nazis. In 2017, after outcry over the growing number of vocal white supremacists, Twitter decided to give users more characters for their display names. (Yeah, I didn’t get it, either.) Many Twitter users responded by using their additional characters to protest and ask Jack to remove hate speech, with usernames like “Would Prefer You Ban Nazis” seen across the site.

Some even go as far as to invoke the Civil Rights Era, when the government and KKK alike used violence and intimidation to stifle the speech of protesters and activists. Many argue that censorship is a slippery slope, and what is done to Nazis today could be done to Black Lives Matter tomorrow. After a number of large companies refused to continue to do business with Neo-Nazi website The Daily Stormer, resulting in its temporary demise, the internet civil liberties organization EFF put out a scathing statement: “In the Civil Rights Era cases that formed the basis of today’s protections of freedom of speech, the NAACP’s voice was the one attacked.”

Is it really that hard to moderate violent content?

In a word, no. Tech companies could, and should, do much more to stamp out violent rhetoric. Arguments that it would be too time-consuming and expensive for social media websites to police their users in this way often ignore that these companies already have software to filter out hate speech — they just don’t want to use that software here.

After World War II, many European countries enacted laws making it a crime to deny the Holocaust happened. Violation of these laws can result in criminal penalties and hefty fines.

This past summer, after Facebook’s Mark Zuckerberg came under fire for comparing Holocaust denial to simply being mistaken, Germany quickly reminded the tech giant that Holocaust denial is a crime in Germany. German Justice Minister Katarina Barley tweeted, “There must be no room for antisemitism. Verbal and physical attacks are part of that, as well as denying the Holocaust. The latter is being sanctioned here and is being persecuted consistently. #Zuckerberg.” Under German law, social media networks are required to remove flagged content within 24 hours of receiving a report. When it comes to Holocaust denial, rather than removing the posts entirely, Facebook simply uses geotagging software to make those comments inaccessible in countries where they constitute crimes. Any argument that it would be too expensive, or too complicated, to ban hate speech in the United States is refuted by the fact that the software already exists and is used across the European Union.

It’s not as difficult to protect historically marginalized groups as some would have you believe. For example, there is no civilization on Earth where women are not marginalized and discriminated against. There is no country on Earth where LGBT people are not discriminated against. Refugees and asylees are, by definition, persecuted. Identifying these groups, even across cultures, is not difficult.

Violent racism, anti-Semitism, and all manner of hate crimes are on the rise. We’re living in a time where hatred and nationalism are globally on the rise, in a world where people can use Facebook to incite genocide, and in a country where terrorist attacks and mass shootings are regular occurrences.

Slippery slopes are rightfully terrifying when they come from the government, which has the power to deprive people of their liberty. But for individuals, distinguishing between Nazis and civil rights activists shouldn’t be difficult.

We shouldn’t have to wait for mass murder for tech companies to take responsibility for the proliferation of hate speech and threats of violence on their platforms. In this moment, tech companies have a chance to act to stop violence and radicalization.

If tech companies don’t want blood on their hands, they have an obligation to do a better job of monitoring, and not just monetizing, the content they host. If the powers that be find this task to be too difficult, perhaps they should not be the ones in charge.

So, what’s next?

I’ve been politically engaged for most of my life. Until recently, I never seriously worried about Nazis in the U.S. Now, a former leader of the American Nazi Party is running for Congress near my hometown of Wonder Lake, Illinois. This Nazi, and many others like him, use social media to disseminate hate.

The way that companies like Facebook and Twitter currently operate puts much of the onus on the site’s users. Most posts have to be reported by users in order to be removed (although the platforms have repeatedly come under fire for failing to remove hate speech and threats even after being reported by users).

The most effective way to get tech giants to sit up and listen is to hit them in the pocketbook. For many, just leaving social media altogether may not feel like a viable option. Social media websites can be incredible tools to connect loved ones and share news about important events. Journalists, activists, and politicians rely on social media networks to connect to people. But leaving isn’t the only way to have an impact.

Increasingly, users have launched successful protests by targeting the money behind the problem, alerting brands when their ads appeared in sites carrying sexist, racist or anti-Semitic content. Breitbart quickly lost a number of advertisers in 2017 when a social media campaign targeted companies like Mercedes-Benz and Nordstrom for putting their money there.

If users want tech giants to start to listen to our concerns, we have to hit them where it hurts the most: in the wallet.

Jamie Lynn Crofts is a constitutional and civil rights attorney in Charleston, West Virginia. She is a graduate of Northwestern University School of Law, a former federal judicial law clerk, and previously worked as the Legal Director for the ACLU of West Virginia.
