AI Content Moderation, Racism and (de)Coloniality

To what extent does the current AI moderation system of platforms address racist hate speech and discrimination? Racialised people's negligible input into decision-making processes, the disregard of their experiences and the expropriation of their labour highlight the colonial nature of present-day AI content moderation.

November 2021. AI Content Moderation, Racism and (De)Coloniality develops a critical approach to AI in content moderation, adopting a decolonial perspective. In particular, the article asks: to what extent does the current AI moderation system of platforms address racist hate speech and discrimination?

Based on a critical reading of publicly available materials and publications on AI in content moderation, we argue that racialised people have no significant input into the definitions of racist hate speech or the decision-making processes around it, and are further exploited as their unpaid labour is used to clean up platforms and to train AI systems. The disregard of the knowledge and experiences of racialised people, and the expropriation of their labour without compensation, reproduce rather than eradicate racism.

To make theoretical sense of this, we draw on Anibal Quijano’s theory of the coloniality of power and the centrality of race, concluding that, in its current iteration, AI in content moderation is a technology in the service of coloniality. Finally, the article develops a sketch for a decolonial approach to AI in content moderation, one that aims to centre the voices of racialised communities and to reorient content moderation towards repairing, educating and sustaining communities.

You can read the whole paper here.