I assure you, the title is not a provocation. It is a legitimate inquiry based on the latest report from Amnesty International on Meta's actions in Ethiopia. Essentially, Meta manipulated its algorithms to maximize engagement, which meant, in the words of Amnesty International, Facebook "… supercharged the spread of harmful rhetoric targeting the Tigrayan community, while the platform's content moderation systems failed to detect and respond appropriately to such content." This contributed to at least one murder, and possibly more. Facebook's moderation response was so poor that people essentially gave up trying to report the vile content its algorithms were promoting, because nothing was ever done about it.
Well, I can hear you say, how can you blame Facebook for knowing how people would react to hateful content being force-fed to them by its algorithms? To ask the question is to answer it, but if you want more evidence: this is at least the second time that Facebook's engagement-at-all-costs playbook has exacerbated ethnic tensions to the point of stoking violence. Three years ago, Facebook was a key contributor to the ethnic violence in Myanmar. In fact, whistleblower testimony and documents show that even then Facebook knew what its engagement practices were doing to the country, but executives, likely including Zuckerberg himself, refused to allow safety teams to intervene because such intervention would drive down engagement and thus ad revenue.
And Facebook allowed it to happen again. Why wouldn’t they? No one responsible for these decisions faced any consequences. In fact, they were likely rewarded for their behavior by Facebook itself for increasing revenue.
While the people most responsible for the murders are those who committed them, we would have no trouble with the concept of incitement if it happened anywhere other than the internet. The idea that the internet is somehow special, that we must never hold companies like Facebook liable for their actions or we cannot have online reviews, is right-wing, libertarian bullshit designed to ensure that corporations can make as much money off of blood and misery as they want.
Facebook is making an editorial decision to promote hate via its algorithms. Human beings are ultimately responsible for the output of those algorithms, for what they focus on and emphasize. The idea that such choices must be treated as sacrosanct or we can never have internet speech is a fairy tale. Facebook’s algorithms are a product and need to be treated as such. We did not excuse the Rwandans who used their radio station to incite genocide and yet somehow free speech survived. We can do the same for Facebook’s leaders.
I know this attitude, this concern for people, does not mark me as a serious person, as one who is sober enough to be taken seriously in the deep discussions about how the internet is different and special and a golden unicorn that we must all tiptoe around and pretend cannot be treated like all the other animals in the zoo. I don't give a shit.
I am tired of people ignoring the very real harms that products like Facebook do, and ignoring the very real ways in which, in every other aspect of society, we have for hundreds of years balanced dealing with those harms against the need to protect speech. To argue that the executives at Facebook, the ones who direct the algorithms, should be protected is to place human beings below nothing more than profit. To argue that we must allow incitement to genocide in order to have speech on the internet is so intellectually and historically bankrupt a position as to leave me no choice but to think you simply do not care about actual human beings.
Put Zuckerberg in the dock at The Hague. Subject Facebook to full discovery and let him defend Facebook's actions in front of a jury. It might not be serious and sober-minded, but it certainly would be just.