A fair process is the essential element for trust and safety
Building fair content-moderation processes is a business imperative for platforms to attract and retain users.

PARTNER CONTENT | Louis-Victor de Franssu and Theodoros Evgeniou
It’s clearer than ever that online content moderation is no longer just a technical or policy issue; it is a fundamental global governance challenge. Every day, massive digital platforms exercise immense power in determining what speech is permissible, whose voices are amplified and what content is restricted or removed.
Without a commitment to fair processes, this power can be wielded arbitrarily and without scrutiny, ultimately eroding users’ trust. And trust is the fragile but essential currency of our rapidly expanding digital ecosystem. The only way to maintain it is to develop fair content-moderation processes that ensure decisions about removing posts, suspending accounts or taking any other enforcement action are made transparently, consistently and with due regard for users’ rights. Procedural fairness (how content-moderation decisions are made) matters to people at least as much as distributive fairness (what decisions are made), and typically more.
At the moment, content moderation faces a dual challenge: the rapid rise of artificial intelligence-driven systems and mounting regulatory demands. Large language models and generative AI can enable platforms to process vast amounts of data at unprecedented speeds, offering the potential to alleviate some of the key challenges of current automated tools that often struggle with accuracy and context.
While these systems can reduce human biases and improve precision, they are far from flawless. They frequently fail to grasp nuance, cultural context and evolving trends in harmful content. As companies continue to automate at speed, they are bound to face both wrongful removals and undetected violations caused by these systems, underscoring the continued importance of strong human oversight.
At the same time, global regulators are tightening oversight. In the EU, the Digital Services Act requires that platforms implement specific processes and ensure transparency, while granting regulators the authority to monitor compliance. Similarly, the UK Online Safety Act mandates extensive transparency and fair practices, empowering the Office of Communications to hold platforms accountable. Regulation is not limited to Europe: India, Brazil, Singapore and many other countries are adopting new regulatory frameworks. As these regulations take hold, public scrutiny is intensifying, and platforms that fail to uphold fair processes risk both legal penalties and reputational damage.
This dual challenge underscores the need to codify fairness directly into the mechanisms of content-moderation tools and systems. Fair processes are not just a technical feature or a regulatory checkbox. They enable platforms to achieve their core mission and business goals: providing the best experience to users, retaining existing customers and attracting new ones. The keys to developing fair content-moderation processes include the following.
Transparency: Users should have easy access to platform policies that clarify why their content was removed or their account was suspended (e.g. the mandatory Statement of Reasons under the Digital Services Act).
Consistency: Moderation decisions should be applied equitably and in a replicable manner across all users, communities and content categories. Both human and AI-based enforcement must be regularly audited with quality controls and live monitoring to ensure impartiality.
Proportionality: The severity of enforcement actions should be proportional to the harm posed by the content, and users should have clarity on the principles underlying such choices. For instance, blanket bans and permanent suspensions should be reserved for the most egregious violations.
Right to appeal: Users should have a meaningful way to challenge moderation decisions, with a structured process for reviewing appeals.
Human oversight: Fair moderation requires human oversight at each stage—design, deployment and evaluation—to facilitate bias mitigation, contextual understanding and adaptability to threats. Real-time monitoring tools should provide access to content and trends overviews, keeping systems in check.
Building fair content-moderation processes is a business imperative that will help platforms attract and retain users by creating the most open yet safe environment for them. Users are more likely to remain engaged on platforms where they feel they are treated fairly, have recourse when moderation decisions affect them and understand how and why decisions about their content are made. The companies that prioritize fairness, transparency and accountability will not only navigate the evolving technological and regulatory landscape more effectively but will also emerge as industry leaders.
Louis-Victor de Franssu is the co-founder of Tremau, an online trust and safety company, and the former deputy to the French Ambassador for Digital Affairs.
Theodoros Evgeniou is a professor of technology and business at INSEAD, where he directs the executive training programs on AI, and is a co-founder of Tremau.