AI safety is a misnomer without Global Majority inclusion
A limited understanding of AI’s real-world impact on the Global Majority means the world’s most populous and under-resourced countries remain at risk.

COMMENTARY
By Chinasa T. Okolo
Artificial intelligence safety has emerged as a critical area of inquiry, seeking to ensure that AI systems operate reliably, ethically and beneficially. However, mainstream AI safety discourse remains largely shaped by Western objectives and priorities, often privileging concerns such as technical alignment and misuse over broader societal and contextual harms.
This narrow framing is reflected in broader concerns about the risks of AI and in the focus of AI fairness and safety research, much of which is produced within and primarily addresses the contexts of Western, Educated, Industrialized, Rich and Democratic (WEIRD) countries. As a result, AI safety efforts are disproportionately designed to benefit stakeholders in high-income nations, frequently neglecting not only the lived realities of marginalized populations within these societies—including Black, Latinx and Indigenous communities—but also those in the Global Majority.
This exclusion is not a small oversight. It risks deepening existing global inequities: safety efforts that fail to account for how AI systems process and interpret non-Western languages, cultures and values end up amplifying risks for Global Majority communities.
Recent international initiatives, notably the series of AI Safety Summits, have sought to address global AI safety concerns. However, these efforts continue to demonstrate limited inclusion of Global Majority perspectives. In 2023, the UK hosted the inaugural AI Safety Summit to convene government officials, representatives from top AI companies, civil society stakeholders and academic researchers to discuss the risks of AI and to work toward mitigation through technological and regulatory measures. Yet out of the 27 governments represented, only seven—Brazil, India, Indonesia, Kenya, Nigeria, the Philippines and Rwanda—were from low- or middle-income countries.
The AI Seoul Summit in May 2024 saw even less representation, with only three Global Majority countries (India, the Philippines and Rwanda) among the 20 governments in attendance. Although it remains unclear which countries will actually participate in the Paris AI Action Summit, a significant increase in Global Majority representation appears unlikely. The launch of specialized AI Safety Institutes further exemplifies this lack of participation: While Kenya is represented, the majority of the network's institutes are based in high-income jurisdictions, including the UK, US, Japan, France, Germany, Italy, Singapore, South Korea, Australia, Canada and the European Union.
A significant dimension of AI safety work has centered on the development of technical benchmarks, which serve as crucial tools for evaluating the capabilities and associated risks of AI systems. However, many of these benchmarks are underpinned by Western-centric assumptions about trust, safety and security, which evaluation methodologies then reinforce.
Widely used benchmarks like Massive Multitask Language Understanding (MMLU) offer limited coverage of non-Western languages, topics and cultural norms. This gap, compounded by the failure of general-purpose AI developers to improve the cultural robustness of their systems, results in models that perform inadequately in diverse contexts and therefore perpetuate systemic biases. Moreover, technological safeguards such as automated content moderation are optimized primarily for English, leaving other linguistic communities vulnerable to misinformation, censorship and harm. These deficiencies underscore the urgent need for AI safety frameworks that account for the linguistic, social and political complexities of the Global Majority to enable the development of safe and reliable AI systems.
The limited empirical understanding of the real-world impact of AI in Global Majority contexts further constrains efforts to develop inclusive safety strategies. To move toward more inclusive AI safety, we must first work to understand the risks AI systems pose to populations and consumers within the Global Majority.
A significant amount of AI fairness research focuses on Western contexts and revolves around Western constructs such as race. While this has yielded important insights into facial recognition bias, it remains insufficient for capturing the multifaceted nature of AI-related harms in non-Western societies. Factors such as caste, tribal affiliation, religious identity and their intersection with other dimensions of social stratification, including gender and socioeconomic status, play a crucial role in shaping the lived experiences of communities in the Global Majority. As AI systems continue to be deployed at scale in these regions, it is imperative that the international community—including frontier AI developers, international standards bodies and multilateral institutions—prioritize a more holistic and contextually grounded approach to AI safety.
Advancing an inclusive AI safety paradigm requires meaningful investment in Global Majority-led research and capacity-building initiatives. Researchers from these regions must be provided with adequate resources to develop contextually appropriate evaluation methodologies. Additionally, Global Majority governments must be equitably represented in international AI governance discussions and afforded substantive opportunities to shape the trajectory of AI safety initiatives.

Globalized approaches to AI safety provide a critical opportunity to reshape discourse and practices around responsible AI. By centering the unique risks, opportunities and cultural considerations of Global Majority communities, these efforts can redefine what it means for AI to be "safe" in a pluralistic society. Addressing longstanding structural imbalances in AI safety discourse, alongside intentional investments in research and advocacy for globalized AI safety approaches, will require sustained commitment but is essential to fostering a more equitable AI future.
Chinasa T. Okolo, Ph.D., is a fellow at The Brookings Institution and a recent computer science Ph.D. graduate from Cornell University. Her research focuses on AI governance for the Global Majority, datafication and algorithmic marginalization, and the socioeconomic impact of data work.