Tech's new AI cold war
The fight for artificial intelligence dominance is pushing global cooperation to the brink, and the consequences could be catastrophic.

By Tekendra Parmar | Contributing Editor
In November 2023, leaders from 28 countries—including the United States, China, India, Nigeria and representatives from the European Union—gathered at Bletchley Park in Buckinghamshire, England, to sign a historic declaration of cooperation. Echoing treaties from the Atomic Age, the Bletchley Declaration affirmed its signatories’ commitment to design, develop and regulate artificial intelligence while ensuring the safety of humankind.
Days before the summit, then-British Prime Minister Rishi Sunak warned that the existential risk of AI was akin to the threat of nuclear war. To confront these potential societal harms, he announced the world’s first AI Safety Institute. Shortly thereafter, the Biden administration issued an executive order establishing an American AI Safety Institute while advocating for responsible innovation that safeguards workers’ rights. But this collective effort is quickly unraveling. Instead of fostering global cooperation, we are now locked in a head-to-head arms race between AI superpowers.
As a result, the Doomsday Clock, a signal of how close humanity is to annihilation, maintained by the heirs of the very scientists who worked on the nuclear bomb, has moved forward one second, to 89 seconds to midnight. We are, at least by this calculation, moving aggressively toward our own extinction. One of the chief concerns among the clock’s keepers, the Bulletin of the Atomic Scientists, is the unchecked rise of AI.
In January, the new US presidential administration repealed Biden’s executive orders on AI just hours after President Trump took office. In their place, the administration set forth a new vision for American AI policy—one in which innovation is unfettered by regulatory burdens. Similarly, Trump’s FTC chair, Andrew Ferguson, said AI regulators are on the wrong side of the debate. This anti-regulatory stance, according to the new administration, will help America win the AI race—but at what cost?
The last decade has foreshadowed what unchecked technological growth may impose on society. Even as Biden warned of the growing power of the tech-industrial complex, these firms latched on to the renewed jingoism. A week before Trump’s inauguration, OpenAI released a 15-page memo expressing its views on how the United States can beat China in the AI race. If Washington doesn’t act, it warned, AI investments will go to China-backed projects, “strengthening the Chinese Communist Party’s global influence.” The firm is taking a page from former Google CEO Eric Schmidt’s “The New Digital Age,” outlining a new era in which Silicon Valley is the engine of America’s global influence.
As the Trump administration rejects AI regulation, the EU’s appetite for curtailing the power of Big Tech is growing. In March 2024, the European Union passed the AI Act, the most comprehensive piece of AI regulation to date. The act takes a risk-based approach, banning certain activities, such as using algorithms for social scoring, while imposing stringent rules on AI in national infrastructure, legal systems and other critical areas. As the act took shape, OpenAI CEO Sam Altman—who once warned of the existential risks posed by the very systems he was building—threatened to leave the EU if compliance became too cumbersome. The company lobbied with at least some success to dilute the act’s terms, according to reporting by Time magazine.
But Europe isn’t a unified bloc when it comes to countering Big Tech. France, the host of the recent AI Action Summit, lobbied alongside American firms to weaken portions of the AI Act. The country is home to Mistral AI, one of Europe’s biggest AI firms. France’s position is that stringent regulations may prevent the creation of a European alternative in the face of Chinese and American dominance. “Rather than lamenting that the great digital champions are America today and China tomorrow, let us put ourselves in a position to create European champions,” French President Emmanuel Macron said in a recent speech at Sorbonne University.
Alongside the West’s unsettled debate over regulation and how to compete in the AI race, China’s technological prowess continues to grow. A previously little-known Chinese startup called DeepSeek has suddenly disrupted the Western AI industry by releasing a model as sophisticated as OpenAI’s latest, but trained at a fraction of the cost. The model, which reportedly cost only $6 million to train, helped wipe roughly $1 trillion off the combined value of US tech stocks.
Technologists and policymakers have reacted to this new AI contender with a combination of fear, envy and admiration. “DeepSeek-R1 is AI’s Sputnik moment,” venture capitalist Marc Andreessen wrote on X. But the moment didn’t come out of nowhere. China is the largest producer of AI research in the world. At the same time, it has some of the world’s most robust AI regulations, outlining disclosure requirements, model-auditing mechanisms and technical performance standards. Of course, many of these regulations serve China’s strict censorship regime: Ask DeepSeek’s R1 about Tiananmen Square and it will tell you that the question is beyond its programming. Question it about the sovereignty of Taiwan or the Spratly Islands in the South China Sea, and, predictably, R1 forcefully stakes China’s claim to those territories.
Despite this apparent censorship, R1 is a soft power coup. Not only will DeepSeek make efficient AI models more accessible to the global market, including in the West, but the company also made its training process public. If the reported costs of DeepSeek’s creation are accurate (and there is evidence to suggest they aren’t), the model may inspire AI development in economies that previously believed themselves priced out of the market. On our current global trajectory, however, it is likely that competition, barriers to entry and the power of data will subjugate formerly colonized nations once again.

Tech companies have never been accountable to users in the Global Majority. Meta once promised to bring the benefits of the internet to billions of users across Africa, Asia and Latin America, only to be accused of facilitating ethnic conflicts everywhere from Myanmar to Ethiopia through unchecked disinformation and misinformation on its platforms. When a court in Kenya tried to hold Meta accountable, the company argued that the Kenyan court had no jurisdiction over an American firm. That precedent makes it hard to believe the companies building powerful AI will voluntarily heed the interests of the most vulnerable, and it underscores the importance of a strong Global Majority coalition to help lead AI regulation.
India, the co-chair of the AI Action Summit, is a prime illustration of the dangers AI poses to emerging economies. AI threatens to automate two backbones of the country’s economy: outsourcing and manufacturing. The IT industry in India employs nearly 6 million people performing routine programming and data management. Over half of those workers fear losing their jobs to automation within the next five years, according to the country’s latest economic survey. As factories shift toward automation, India risks both job losses and the reshoring of production, with wages for those who remain employed likely to be pushed even lower as they compete with robots at home and abroad.
Yet initiatives to retrain India’s workforce for the AI era have been slow to materialize. In Time’s AI 100 list, the Indian billionaire Nandan Nilekani—cofounder of Infosys and chief architect of Aadhaar, India’s largest digitization initiative—touted his foundation’s Springboard program as one such retraining effort. The program claims over 400,000 learners, but it’s unclear whether it can scale to meet the volume of AI-related job losses.
Nilekani has recently argued that, unlike China, India should temper its ambitions of entering the AI race and building its own LLM. Instead, he says, it should focus on building data centers and applications that use pre-existing models. The advice echoes one of Silicon Valley’s key talking points to the economically impoverished: Data centers will bring growth and employment to the cities and towns left behind by decades of economic reorientation and offshoring. But from Milwaukee, Wisconsin, to Jamnagar, Gujarat, the jobs produced by constructing these centers will be tenuous at best and will not sustain employment the way the traditional manufacturing industries of the past did.
India is only one example of how far retraining efforts across the Global Majority are lagging, but it is a bellwether for those who will listen. A failure to focus on retraining will aggravate existing civil unrest in developing economies and accelerate mass migration to developed ones. From the events of the last year, it is evident that the cynicism of global competition has put national interest at odds with the safety and security of humanity. As tech companies and their host nations vie to outcompete each other, AI is already proliferating across our lives. The World Economic Forum estimates that AI will displace 85 million jobs globally by this year alone.
The knock-on effects of these disruptions will not be confined within geographical borders. There is a pressing need to shift the conversation from competition to collaboration—failing to do so could come at a critical cost not only to global economies but also to the safety of people everywhere.