
Cloudflare stopped supporting hate site Kiwi Farms. How defensible is their justification?

Should internet infrastructure companies desert sites that spread hate? That’s a very hard question. Let’s look at one recent instance — Cloudflare withdrawing its services from Kiwi Farms — and analyze the statement it made to justify that extraordinary action.

What is Kiwi Farms?

Kiwi Farms is an online community for trolls who harass people they don’t like. Many of those victims are trans people.

What sort of harassment? Activities of its anonymous users include doxxing (revealing people’s addresses and other personal information so that others can physically attack them) and swatting (calling law enforcement to falsely report crimes, leading to armed police raiding innocent people’s homes). As NBC News describes:

The forum is a massive archive of sensitive information on their targets, which has been used to repeatedly harass them. Kiwi Farms’ most notorious section is titled “lolcows” and targets transgender people.

The archive often features social media pictures of their targets’ friends and family, along with contact information of their employers. The information is used in an effort to get their targets fired or socially isolated by spreading rumors that they are pedophiles or criminals.

Experts fear Kiwi Farms is starting to target other communities — and that their tactics are being duplicated throughout the political world to intimidate political enemies.

“This kind of Kiwi Farms thing could easily become, say, an FBI agent database. It would not be hard,” warned Fredrick Brennan, who previously worked with the founder of Kiwi Farms at another extremist site, 8chan. Brennan believes that the harassment techniques being used to target trans people now could easily be channeled toward larger goals and government targets — including other recent political targets of the far-right, like the FBI.

Vice reports that Kiwi Farms harassment drove at least three people to suicide. A trans activist named Clara Sorrenti was swatted on August 5, and she and her family were stalked through several changes of location. The police chief who vowed to investigate Sorrenti’s harassment was himself doxxed. Trolls hacked her Uber account and sent hundreds of dollars of groceries to her hotel, in part to reveal that they knew where she was hiding. They even harassed an unrelated person with a similar name.

Cloudflare originally withstood calls to terminate Kiwi Farms

Cloudflare supplies essential services to websites. These include distributing content globally, protecting against DDoS (distributed denial of service) attacks, and hiding the contact information in domain registrations.

Cloudflare has deplatformed other outrageous sites like 8chan, which hosted the manifesto of a mass shooter. But just weeks ago, the company argued that it has to provide service to everyone who is not behaving illegally, regardless of their activities, to keep the Internet safe. Here’s part of the original blog post:

Some argue that we should terminate these services to content we find reprehensible so that others can launch attacks to knock it offline. That is the equivalent argument in the physical world that the fire department shouldn’t respond to fires in the homes of people who do not possess sufficient moral character. Both in the physical world and online, that is a dangerous precedent, and one that is over the long term most likely to disproportionately harm vulnerable and marginalized communities.

Today, more than 20 percent of the web uses Cloudflare’s security services. When considering our policies we need to be mindful of the impact we have and precedent we set for the Internet as a whole. Terminating security services for content that our team personally feels is disgusting and immoral would be the popular choice. But, in the long term, such choices make it more difficult to protect content that supports oppressed and marginalized voices against attacks. . . .

Just as the telephone company doesn’t terminate your line if you say awful, racist, bigoted things, we have concluded in consultation with politicians, policy makers, and experts that turning off security services because we think what you publish is despicable is the wrong policy. 

Analyzing Cloudflare’s about-face

Cloudflare eventually changed its mind and terminated Kiwi Farms. Let’s take a look at how its CEO Matthew Prince justified his decision.

Blocking Kiwifarms

09/03/2022

Matthew Prince

We have blocked Kiwifarms. Visitors to any of the Kiwifarms sites that use any of Cloudflare’s services will see a Cloudflare block page and a link to this post. Kiwifarms may move their sites to other providers and, in doing so, come back online, but we have taken steps to block their content from being accessed through our infrastructure.

This is an extraordinary decision for us to make and, given Cloudflare’s role as an Internet infrastructure provider, a dangerous one that we are not comfortable with. However, the rhetoric on the Kiwifarms site and specific, targeted threats have escalated over the last 48 hours to the point that we believe there is an unprecedented emergency and immediate threat to human life unlike we have previously seen from Kiwifarms or any other customer before.

This is a very strong opening, quite unlike most equivocal, self-serving corporate statements. The first sentence describes Cloudflare’s actions briefly, clearly, and in active voice, including the word “we.” In the first two paragraphs you get a clear idea of what Cloudflare has done and why, with no apology or waffling.

But why block Kiwi Farms and not thousands of other hateful sites? Here’s the explanation:

Escalating threats

Kiwifarms has frequently been host to revolting content. Revolting content alone does not create an emergency situation that necessitates the action we are taking today. Beginning approximately two weeks ago, a pressure campaign started with the goal to deplatform Kiwifarms. That pressure campaign targeted Cloudflare as well as other providers utilized by the site.

Cloudflare provided security services to Kiwifarms, protecting them from DDoS and other cyberattacks. We have never been their hosting provider. As we outlined last Wednesday, we do not believe that terminating security services is appropriate, even to revolting content. In a law-respecting world, the answer to even illegal content is not to use other illegal means like DDoS attacks to silence it.

We are also not taking this action directly because of the pressure campaign. While we have empathy for its organizers, we are committed as a security provider to protecting our customers even when they run deeply afoul of popular opinion or even our own morals. The policy we articulated last Wednesday remains our policy. We continue to believe that the best way to relegate cyberattacks to the dustbin of history is to give everyone the tools to prevent them.

However, as the pressure campaign escalated, so did the rhetoric on the Kiwifarms site. Feeling attacked, users of Kiwifarms became even more aggressive. Over the last two weeks, we have proactively reached out to law enforcement in multiple jurisdictions highlighting what we believe are potential criminal acts and imminent threats to human life that were posted to the site.

While law enforcement in these areas are working to investigate what we and others reported, unfortunately the process is moving more slowly than the escalating risk. While we believe that in every other situation we have faced — including the Daily Stormer and 8chan [similarly lawless sites from which Cloudflare has removed its support] — it would have been appropriate as an infrastructure provider for us to wait for legal process, in this case the imminent and emergency threat to human life which continues to escalate causes us to take this action. . . .

But we need a mechanism when there is an emergency threat to human life for infrastructure providers to work expediently with legal authorities in order to ensure the decisions we make are grounded in due process. Unfortunately, that mechanism does not exist and so we are making this uncomfortable emergency decision alone.

It’s a tough argument to make that your policy is to support all sites, including hateful ones, but that this case is different.

The difference here is that Kiwi Farms has participated in threats to human life, and that’s where Cloudflare draws the line.

It’s interesting that Prince writes that Cloudflare’s action was not “directly” in response to the pressure campaign that activists brought to bear on the company after the attacks. Prince doesn’t want to encourage pressure campaigns against other sites, since he would hope to resist them.

Prince concludes with this:

We will continue to work proactively with law enforcement to help with their investigations into the site and the individuals who have posted what may be illegal content to it. And we recognize that while our blocking Kiwifarms temporarily addresses the situation, it by no means solves the underlying problem. That solution will require much more work across society. We are hopeful that our action today will help provoke conversations toward addressing the larger problem. And we stand ready to participate in that conversation.

That, of course, is mere wishful thinking.

Cloudflare needs a clearer policy. So do all other platforms.

Cloudflare’s statement is clear and, in my opinion, right. Infrastructure providers can’t get into blocking sites because they include nasty things — that fight would be endless. But when a site becomes a knowing nexus for what is, basically, terrorism and lethal harassment, it gives up the privilege of basic infrastructure services like hosting, the domain name system, and protection against DDoS attacks.

Cloudflare’s policy now is, basically, we support everybody unless people are getting killed. That’s too vague. The nastiest and most dangerous sites on the web will see how close they can get to that boundary. And that means more challenging decisions for Cloudflare.

What apps are allowed on the Apple or Google app stores? What sites can hosts and domain registrars continue to support? What companies or organizations are allowed to use platforms like WordPress, Twitter, Facebook, Instagram, or TikTok? These are hard questions.

All platform companies must remain aware of continuing trends in hate, disinformation, harassment, and stochastic terrorism (posting statements that are likely to lead to attacks). Their policies should be as clear as possible, both to deter bad actors from probing for loopholes and to make their own decisions easier. But those policies must also evolve as hateful and violent people discover new strategies to spread evil, and there will continue to be special cases where common sense demands removing dangerous content. It’s not an easy balance to strike. But it’s essential to the continuation of the internet as a central part of a civil society.


3 Comments

  1. What can be done to accelerate the policing of such actions?
    It seems that most of the issues are illegal (and the SWATting is self-inflicted) and the police are the appropriate folks to deal with them. (And it seems that they are not.)

    Are we missing legal mechanisms or other pieces of the solution?

    Any idea what previous information providers/facilitators did? They mention telephone service providers, which have a troubled history for sure. Newspapers contributed to many illegal and otherwise questionable operations in unwitting as well as willing fashions for years.

    What would be a better word/term for “common sense,” which is nonexistent?

  2. Very interesting! I found this sentence most intriguing: “But we need a mechanism when there is an emergency threat to human life for infrastructure providers to work expediently with legal authorities in order to ensure the decisions we make are grounded in due process.” These platforms, and the technology behind them, have marched ahead FAR faster than have our social norms, business practices, and legal systems (and the budgets of law enforcement). It’s not just Cloudflare; we’re all just kinda swinging in the wind, trying to figure out how to grapple with the gooey, amoral underbelly of humanity and the free expression we are granting sociopaths in this brave new world.