Facebook Tests Telling Users if Their Post was Removed by Automation



Facebook posted its First Quarterly Update on the Oversight Board. Part of the update involves sharing its progress on the board’s non-binding recommendations.

…The board’s recommendations touch on how we enforce our policies, how we inform users of actions we’ve taken and what they can do about it, and additional transparency reporting…

Facebook provided some examples of actions it has taken in response to the board’s recommendations:

Facebook launched, and continues to test, new user experiences that are more specific about how and why it removes content. I think this is a good idea, because there will always be someone new to Facebook who hasn’t been there long enough to learn what is, and is not, allowed.

Facebook also made progress on the specificity of its hate speech notifications by using an additional classifier that can predict which kind of hate speech is in the content: violence, dehumanization, mocking hate crimes, visual comparison, inferiority, contempt, cursing, exclusion, and/or slurs. Facebook says that people using Facebook in English will now receive more specific messaging when they violate the hate speech policy. More specific notifications for hate speech violations in other languages will roll out in the future.
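
Facebook hasn’t published how this works under the hood, but conceptually the idea is simple: a classifier scores the removed post against each category, and the notification picks the most specific message it can support. Here is a minimal sketch of that idea in Python. The category names come from Facebook’s list above; the classifier scores, the threshold, the message templates, and the `build_notification` function are all hypothetical, invented for illustration.

```python
# Hypothetical sketch: turning per-category classifier scores into a
# more specific removal notification. Not Facebook's actual code.

HATE_SPEECH_CATEGORIES = [
    "violence", "dehumanization", "mocking hate crimes", "visual comparison",
    "inferiority", "contempt", "cursing", "exclusion", "slurs",
]

# One message per category (only two shown here for brevity).
MESSAGE_TEMPLATES = {
    "slurs": "Your post was removed because it appears to contain a slur.",
    "dehumanization": "Your post was removed because it appears to dehumanize a group of people.",
}

GENERIC_MESSAGE = "Your post was removed because it violates our hate speech policy."

def build_notification(scores: dict[str, float], threshold: float = 0.8) -> str:
    """Pick the most specific message the classifier's confidence supports.

    `scores` maps each category to the classifier's confidence that the
    removed post contains that kind of hate speech. If no category clears
    the threshold (or no template exists), fall back to the generic message.
    """
    top_category = max(scores, key=scores.get)
    if scores[top_category] >= threshold:
        return MESSAGE_TEMPLATES.get(top_category, GENERIC_MESSAGE)
    return GENERIC_MESSAGE

# Example: a post the classifier is confident contains a slur.
print(build_notification({"slurs": 0.93, "contempt": 0.41}))
```

The fallback to a generic message matters: a low-confidence guess shown to an angry user would be worse than no specificity at all.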

I’m not sure that more specific notifications will influence people to stop posting hate speech. A user who is angry about having a post removed might double down and post something even worse. It is also unclear to me whether Facebook imposes any penalty for posting hate speech (other than removing the post).

Facebook is running tests to assess the impact of telling people whether automation was involved in the enforcement. This likely means that if a user’s post is removed for breaking the rules, and the decision was made by automation, the user will be informed of that.
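
Facebook hasn’t described what such a notice would actually look like. As a rough sketch of the concept only, the change amounts to carrying one extra piece of information in the removal notification: whether a person or a machine made the call. Every name and field below is hypothetical.

```python
# Hypothetical sketch of a removal notice that discloses whether the
# enforcement decision was automated. Not Facebook's actual data model.
from dataclasses import dataclass

@dataclass
class RemovalNotice:
    post_id: str
    policy_violated: str         # e.g. "hate speech"
    decided_by_automation: bool  # the new piece of information being tested

def notice_text(notice: RemovalNotice) -> str:
    decider = "an automated system" if notice.decided_by_automation else "a human reviewer"
    return (
        f"Your post was removed for violating our {notice.policy_violated} policy. "
        f"This decision was made by {decider}."
    )

print(notice_text(RemovalNotice("123", "hate speech", decided_by_automation=True)))
```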

Personally, I think that last recommendation could be controversial. A person might get really angry after learning that their post was removed by automation instead of by a human. That might lead the user to try to convince Facebook to have a human check the post (in the hopes of getting a more favorable result). If that happens a lot, I suspect that political leaders might add to the conversation with their own recommendations.