UC Law Science and Technology Journal

Authors

Hassan Salman

Abstract

The proliferation of automated content moderation on social media has negatively impacted the self-expression of users, including individuals, businesses, and governments. Major social media platforms like Facebook act as public forums for billions of users whose content may vary in terms of acceptability and legality. User content is colored by social as well as personal norms, values, and experiences. For example, though blasphemy may be objectionable in Poland, it may not be so in France. Despite some mistrust over how Facebook and other platforms handle user data and moderate content, users rely on entities like Facebook to correctly filter this type of content in order to maintain clean and safe platforms. However, due to the volume of user content, the lack of human moderators, and pressure from governments to prevent the spread of illegal or objectionable content such as footage of mass shootings and misinformation, these platforms have increasingly employed machine learning algorithms to filter user content automatically. Automated content moderation often fails to fully differentiate between illegal content and legal, though possibly objectionable, content, and can result in unnecessary censorship of users. Moreover, efforts to preempt costly government regulation through proactive self-regulation have exacerbated this issue. By comparing how, why, and in what manner the United States, the European Union, and the Facebook Oversight Board regulate Facebook’s content moderation practices, this article introduces alternative methods of content moderation.
