This week hasn’t been a very good one for pro-Trump and far-right groups. First, Reddit deactivated the “The_Donald” subreddit, among many others, due to repeated incidents of harassment and hate speech. Next, Twitch said it was temporarily suspending the president’s campaign account for violating its policies on “hateful content.” Then, YouTube announced that it would pull the accounts of white supremacists such as David Duke and Richard Spencer, many of whom are also pro-Trump, for similar reasons. On Tuesday, Facebook said it had banned pro-civil-war “boogaloo” groups from its platform as well.
It might seem that at long last, the day of reckoning has come for internet platforms that were once breeding grounds for hate speech and harassment. But while these actions are commendable, it’s worth remembering that these platforms tolerated such activity for years; only when public opinion — and, by extension, profits — were on the line did they choose to do something about it. Hateful and harassing speech still reigns on many social media sites, including the ones above. The problem is far from over, and these companies need to do a lot more to regain public trust. They need to make sure hate speech has no place on their platforms, this week and beyond.
To be clear, online hate groups have been around for far longer than Trump’s presidency. There have always been fringe groups on the internet — see 4chan and 8chan, for example, where forum users harassed women in the video game industry as part of Gamergate. These groups are often a radicalizing force, and have been connected to real-life violent acts like the shootings in El Paso and at the mosques in Christchurch, New Zealand.
“What we’ve found is that hateful actors are early adopters of technology,” Henry Fernandez, a senior fellow at the Center for American Progress Action Fund, told Engadget. Fernandez is also the co-founder of Change The Terms, a coalition of civil and human rights organizations focused on reducing hate online. He cited the recent “Zoombombing” incidents, in which harassers infiltrated private video chats with pornographic images and racial epithets. “Zoom only became a popular tool a few months ago, and hate groups immediately learned how to use it,” he said. “We’ve seen that on every platform.”
But during Trump’s presidency, such fringe groups have been brought into the limelight, and their views are now part of the mainstream conversation. Not only does the president have a large following in white supremacist circles, he often espouses views aligned with the far right. In one tweetstorm, he amplified Twitter accounts promoting the QAnon conspiracy theory that Democrats run a pedophilia cult (a theory that, incidentally, is still being spread on TikTok). More recently, he retweeted a video in which a Trump supporter was seen shouting “white power” (the retweet was later removed). Polls have also shown that many Americans believe Trump is a “legitimizing voice” for online hate groups.
Despite it all, social media and internet platforms like YouTube, Reddit, Twitter and Facebook have long held to a neutral stance. Twitter CEO Jack Dorsey has said that the platform “doesn’t take sides,” and Facebook CEO Mark Zuckerberg has said that he doesn’t want the company to be the “arbiter of truth” — apparent attempts to persuade conservatives that the companies hold no bias against them. Unfortunately, this neutrality has allowed hate speech and harassing language to flourish on social media, in some cases in direct violation of the companies’ own policies. Take, for example, Twitter letting the president off the hook for tweeting violent threats against world leaders, or for the targeted harassment of the Ukraine whistleblower.
It’s only recently that the tide seems to have turned. Even before Monday, Twitter had started fact-checking the president and hiding tweets that glorify violence. Facebook has also taken down a Trump ad for showing a Nazi symbol, and will start adding labels to at least “some” politicians’ posts that violate its policies (no word on whether this will apply to the president).
Some of this is due to the recent rise of the Black Lives Matter movement. Organizations everywhere — from corporate entities to sports institutions — have called for an increase in racial justice awareness. “There’s a racial justice reckoning across the United States,” said Fernandez. “People are protesting in the streets because they believe that progress towards racial justice has been too slow. There’s a real demand for change.”
Fernandez added that part of this could also be due to an increased push for change from inside the companies, such as Facebook employees staging virtual walkouts and speaking out against their CEO for not removing the president’s inflammatory remarks. On top of that, he said that advocacy groups like his, along with other human rights organizations, have been pushing for change in an organized and unified fashion for years. “We’ve had an ongoing engagement with Facebook, Reddit, Twitter, TikTok etc. for a long time,” he said.
But a lot of this can also be attributed to prudent business sense. Recently, several large corporations such as Verizon, Unilever and Starbucks have pulled their advertising dollars from Facebook and other social media companies. As of last Friday, over 120 companies had joined the Stop Hate For Profit boycott organized by civil rights groups. Of course, it’s possible that these corporations were looking to cut costs anyway due to the impact of the coronavirus, but putting their names behind a boycott like this has the added benefit of making them look good, and it also ratchets up the pressure on Facebook.
Still, we shouldn’t forget the companies’ history of letting hate speech slide; none of this is nearly enough. After all, this isn’t the first time internet platforms have taken a stance against hate only to revert to the norm later on. In 2018, for example, Alex Jones’s Infowars was banned from YouTube, Facebook, Spotify, Apple and even Twitter, though only after a period in which Dorsey defended Jones’ continued presence on the site. That, however, didn’t stop far-right leaders like Richard Spencer and David Duke from maintaining a presence on Twitter and tweeting racist and anti-Semitic remarks that somehow did not run afoul of the company’s policies (they were deemed hate-filled rhetoric, rather than direct hate speech). Twitter did suspend their accounts in 2017, but restored them soon after.
Plus, Reddit’s removal of The_Donald was a little toothless. Not only had most of its denizens already migrated to their own website, but the ban was also part of a broader takedown of 2,000 subreddits across the political spectrum, including the leftist podcast community Chapo Trap House. Benjamin Lee, Reddit’s general counsel, told the Times that “There’s a home on Reddit for conservatives, there’s a home on Reddit for liberals” and “There’s a home on Reddit for Donald Trump.”
Not to mention that Twitter and Facebook are still doing only the bare minimum. So far, Twitter has only fact-checked and hidden Trump’s violating tweets, not removed them. Facebook, likewise, said that while it might label violating posts from politicians, it won’t delete them, citing their “newsworthiness.”
Fernandez has a few ideas on how tech companies can go beyond lip service and the occasional act of goodwill to enact serious change. For one thing, he said that removing hate should be their number one priority. “It requires elevating the responsibility and accountability into senior management,” he said. “There should be a single person in senior management where the buck stops on the issues of hate. They should have the authority inside the company to make changes necessary in staffing, training, design, and so forth so that hate doesn’t grow on the platform.”
He added that it’s also vital for companies to be transparent about their decisions. “There needs to be reporting with clarity and depth into how hate operates on the platform, and how they’re removing it from the platform,” he said. Outside experts and researchers need to be able to look at this data and evaluate it themselves as well. In short, it’s not enough to change a few words in the terms of service; the entire infrastructure needs to change too.
Still, Fernandez is hopeful. “I like to think that companies are looking at Zuckerberg’s behavior and the Facebook ad boycott and thinking to themselves, ‘Let’s not make that mistake,’” he said.
Plus, he said the current moment is ripe for change. “We shouldn’t underestimate how big change can be. For example, both NASCAR and the state of Mississippi have voted to get rid of the Confederate flag. That’s huge!” he said. “Now it’s time for Facebook to get rid of its Confederate flag.”