How to build responsible AI for the Global Majority

How do we ensure AI development prioritizes equity and inclusivity rather than deepening existing divides?

COMMENTARY By Jonathan Julion

As artificial intelligence reshapes industries worldwide, a crucial question is emerging: Who will truly benefit from these advances? While AI offers tremendous potential, there is a pressing need to ensure its deployment is ethical and safe, especially for regions facing unique challenges.

The Global Majority, comprising countries in Africa, Asia, Latin America and the Caribbean, faces both significant obstacles and immense opportunities in the AI landscape. Despite its vast potential, AI adoption still faces high barriers in many parts of the world. One of the most pressing challenges is inadequate infrastructure: countries in the Global Majority often struggle with unreliable internet access, limited data storage and insufficient computational power, all of which hinder the development and deployment of AI systems. Building AI tools that remain accessible in rural or underdeveloped areas is crucial to bridging this gap.

Additionally, there is concern that AI could exacerbate existing inequalities. For example, facial recognition algorithms often fail to accurately recognize non-Western faces due to unrepresentative training data. Similarly, predictive policing systems developed in one country may produce biased results when applied elsewhere. These examples underscore the need for AI systems that are inclusive and designed to avoid perpetuating inequality.

Another critical issue is the lack of comprehensive regulatory frameworks. In many regions, AI technologies are deployed with little oversight, creating risks such as data privacy violations and algorithmic bias. To ensure AI serves the public good and doesn’t deepen existing divides, clear ethical guidelines and governance structures are essential.

Despite these challenges, AI offers transformative opportunities. In sectors like health care and agriculture, it is already making headway on some of the Global Majority’s most pressing problems.

In health care, AI has shown great promise in improving diagnostic capabilities in resource-poor settings. In Sub-Saharan Africa, for example, AI tools are being used to screen for diseases like malaria, tuberculosis and HIV, often faster than, and in some settings as accurately as, trained clinicians. These technologies are saving lives and improving health outcomes, especially in areas with limited access to medical professionals and resources.

In agriculture, AI is helping small-scale farmers in countries like India, Kenya and Brazil optimize their practices. AI systems predict weather patterns, monitor crop health and suggest the best planting times. These technologies are invaluable for farmers who face unpredictable weather and limited access to modern resources, enhancing productivity and reducing risks.

These examples demonstrate that when developed and deployed responsibly, AI can serve as an equalizer, bringing life-saving technologies to underserved communities and transforming traditional industries. However, for AI to reach its full potential, it must be developed transparently, with a focus on the specific needs of the Global Majority.

For AI to succeed in the Global Majority, trust must be at the forefront of its development. Widespread adoption depends on it: people must understand how AI systems make decisions, what data they use and how those systems will be applied. Without that transparency, AI systems will struggle to earn the acceptance they need.

To maintain that trust, AI systems must prioritize safety. As AI becomes more integrated into sectors like health care, finance and governance, the risks of biased decision-making, security vulnerabilities and unintended consequences grow. Rigorous testing and clear accountability structures are essential to mitigate any negative impacts.

Creating trustworthy and safe AI is a collective responsibility. Governments, businesses, academia and civil society must collaborate to develop AI systems that serve the public good. This goes beyond technological innovation—it requires new policies, international standards and ethical frameworks that prioritize equity and sustainability.

To ensure that AI benefits the Global Majority, states should implement three policy principles: international collaboration, tailored national AI policies and corporate responsibility. International collaboration may take the form of developed economies investing in AI infrastructure in emerging economies, while sharing knowledge and supporting local innovation ecosystems. To help narrow the AI knowledge gap, developed economies can help build AI research hubs, fund educational initiatives and promote data-sharing. 

At the same time, countries in the Global Majority should tailor national AI policies to their specific needs and contexts. These policies should prioritize ethical AI use, data privacy and efforts to reduce inequality. Engaging marginalized communities in policy development will ensure their voices are heard and their needs are addressed.

Finally, AI companies must prioritize social good over profit. Corporate social responsibility should include creating technologies that are inclusive, ethical and sustainable. Companies must commit to auditing their systems for bias and to ensuring transparency in algorithm design and deployment.

The future of AI should not be determined solely by the priorities of wealthy nations—it must reflect the diverse needs and aspirations of the Global Majority. By addressing infrastructure challenges, promoting inclusive innovation and establishing ethical frameworks, AI can be a force for good. If we act now, we can ensure that AI benefits people everywhere, fostering a more equitable, trustworthy and prosperous world for all.

Jonathan Julion is an AI ethics expert and philosopher with a concentration in cybersecurity, focusing on the intersection of responsible AI development and digital security.