
Twitter’s Dehumanization Policy states: “You may not dehumanize anyone based on membership in an identifiable group, as this speech can lead to offline harm.” The policy defines its terms as follows:
Dehumanization: Language that treats others as less than human. Dehumanization can occur when others are denied human qualities (animalistic dehumanization) or when they are denied their human nature (mechanistic dehumanization). Examples include comparing groups to animals or viruses (animalistic), or reducing groups to their genitalia (mechanistic).
Identifiable group: Any group of people that can be distinguished by their shared characteristics such as their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, serious disease, occupation, political beliefs, location, or social practices.
You can share your thoughts about Twitter’s Dehumanization Policy by filling out a short survey (located on the same page where the policy is described). The survey will be available until Tuesday, October 9, 2018, at 6:00am PST.
I have filled out the survey. In my opinion, this policy could help clean up Twitter and make the entire platform a nicer, safer place to visit.
My hope is that the survey will attract people who know how to give constructive criticism and who have good ideas for improving the policy. Alternatively, the survey might get swarmed by bad actors who just want to cause trouble. If that happens, I doubt Twitter will seek public comment on any other policies it wants to enact.
Twitter points out that Susan Benesch (of the Dangerous Speech Project) has described dehumanizing language as a hallmark of dangerous speech, because it can make violence seem acceptable.
Twitter’s new Dehumanization Policy is designed to reduce (and, ideally, eliminate) dehumanizing language. In turn, that might reduce violence that starts online and spreads to “the real world.”
Image from Pixabay