The chilling effects of using A.I. to hunt ‘pro-Hamas’ activists

The State Department is attempting to use artificial intelligence to target visa holders in the United States who express views counter to U.S. foreign policy. Is the real goal legitimizing a political crackdown?

People gathered outside of a New York court to protest the detention of Mahmoud Khalil at Foley Square on March 12 in New York City. Photo by Michael M. Santiago/Getty Images. Photo illustration by Compiler

ANALYSIS By Tekendra Parmar | Contributing Editor

The arbitrary detention last week of Columbia University graduate and green card holder Mahmoud Khalil, a leading voice in student protests against Israel’s bombardment of Gaza last year, has sent shock waves across U.S. college campuses. For many foreign students, Khalil’s case and the subsequent threats of deportation against scholars who have criticized Israel’s tactics have prompted a question: What are the consequences of speaking your mind on social media?

Two days before Khalil’s arrest, the State Department announced it would use artificial intelligence to scour visa holders’ social media accounts for “pro-Hamas” views, and potentially revoke their visas.

This “Catch and Revoke” policy, according to Alex Abdo, who leads litigation efforts at the Knight First Amendment Institute at Columbia University, clearly violates students’ and visa-holders’ First Amendment rights. The ACLU has also challenged the deportation proceedings against Khalil on the same basis. 

But constitutionality aside, there’s the matter of the technology itself. Is this a cutting-edge tool that can really do what officials claim, or just more AI hype?

Experts have already exposed serious flaws in the AI tools used to scan and analyze text. One major issue is that these systems are notoriously bad at interpreting languages other than English, especially those written in non-Latin scripts. For more than a decade, this problem has plagued social media companies that rely on such tools for content moderation, giving rise to what digital rights activists describe as the over-moderation of Arabic content on platforms like Facebook.

Although the State Department hasn’t said which tools it plans to use going forward, public records reviewed by Compiler show that the agency experimented with using artificial intelligence to monitor social media as recently as 2023. According to an AI audit the department completed last year, it used SentiBERTIQ, an AI model designed by Google, to “identify and extract subjective information” from “source material.” The model was trained on tweets in “English, Spanish, Arabic, and traditional Chinese.” Another program used “Louvain Community Detection” to examine social media networks for related communities.
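The Louvain method named in the audit is a standard, well-documented graph-clustering algorithm: it groups densely interconnected accounts into “communities” by greedily maximizing a score called modularity. The department’s actual implementation isn’t public; the following is only a minimal pure-Python sketch of Louvain’s first (local-moving) phase, run on a toy graph rather than any real social network data:

```python
from collections import defaultdict

def louvain_phase1(edges):
    """One local-moving phase of the Louvain method on an unweighted graph.

    Every node starts in its own community; we repeatedly move each node to
    the neighboring community offering the largest modularity gain, until no
    move improves modularity.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    m = len(edges)                       # total number of edges
    deg = {n: len(nb) for n, nb in adj.items()}
    comm = {n: n for n in adj}           # node -> community label
    tot = {n: deg[n] for n in adj}       # community -> sum of member degrees

    moved = True
    while moved:
        moved = False
        for node in sorted(adj):
            # Provisionally remove `node` from its current community.
            old = comm[node]
            tot[old] -= deg[node]
            # Count links from `node` into each neighboring community.
            k_in = defaultdict(int)
            for nb in adj[node]:
                k_in[comm[nb]] += 1
            # Modularity gain of joining community c:
            #   k_in[c]/m - tot[c]*deg[node] / (2*m*m)
            best, best_gain = old, 0.0
            for c, k in k_in.items():
                gain = k / m - tot[c] * deg[node] / (2 * m * m)
                if gain > best_gain:
                    best, best_gain = c, gain
            comm[node] = best
            tot[best] += deg[node]
            if best != old:
                moved = True

    groups = defaultdict(set)
    for n, c in comm.items():
        groups[c].add(n)
    return sorted(groups.values(), key=min)

# Toy graph: two tight triangles (0-1-2 and 3-4-5) joined by one bridge edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
print(louvain_phase1(edges))  # the two triangles emerge as communities
```

On a real platform, nodes would be accounts and edges would be interactions such as replies or shares; note that the algorithm finds who talks to whom, not what anyone believes, which is part of why treating cluster membership as evidence of ideology is fraught.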

While the AI piece may be somewhat new, the social media-monitoring component of the program is anything but. For nearly a decade, the Departments of State and Homeland Security have been collecting social media information from every noncitizen entering the United States, putting in place the mechanisms for surveilling and deporting people for their espoused values. 

During the first Trump administration, immigration officials began asking visa applicants and foreign visitors to include social media handles on immigration forms. At the time, the administration was unclear about how this information would be used, though for privacy activists, it was a laser sight pointed at the First Amendment rights of visa holders. The Biden administration left the policy in place. And earlier this month, the Trump administration announced plans to expand it to the immigration records of green card applicants.

It’s unclear whether the State Department intends to deploy AI tools it previously used for this new “Catch and Revoke” program. Tools like these may have legitimate uses, such as identifying illegal activities online like selling controlled substances or modeling disinformation networks. But during the Biden administration, the Office of the Director of National Intelligence found that these tools were relatively ineffective at screening people for possible affiliations with terrorist groups.

Alex Hanna, a former Google ethicist and current director of research at the Distributed AI Research Institute, says that regardless of which AI system is used, the large language models that power many of them have historically been biased against minorities. Those biases are exacerbated when a model is asked to work in a language like Arabic, which is especially difficult for AI because it differs sharply between its formal written and colloquial spoken forms.

“It’s going to do a bad job,” Hanna says. “If you’re using something that is trained on written Arabic in newspapers, that’s very different from spoken Arabic or casual Arabic online.”

But in the State Department’s case, the fallibility of the technology may be more a design feature than a bug: it allows the current administration to create a rubber-stamp deportation policy under the guise of artificial intelligence. Combine that with the already elastic definition of what it means to be “pro-Hamas” in American political discourse—which seems to encompass everything from criticism of Zionism to voicing support for the Oct. 7, 2023, massacre—and you can expect the system to flag plenty of posts, but relatively few people who actually represent a terrorist threat.

All told, the AI-driven component of “Catch and Revoke” creates an ideal mechanism for a broad surveillance program that could flag any social media comment objectionable to the Trump administration. According to the Knight First Amendment Institute’s Abdo, the chilling effects of this development are already present on American college campuses. 

“We've talked to non-citizen students and faculty around the country—and people are terrified,” Abdo says. Students and faculty are already self-censoring for fear of deportation, removing social media posts or taking down past scholarship that the administration’s AI might flag as reprehensible.

“The chill is palpable,” Abdo adds. “The government has been pretty clear that they think that they have the constitutional authority to kick people out of the country—to arrest them and detain them—for expressing views that the secretary of State essentially finds too disagreeable.”

For people from Israel and Palestine who have criticized Israel’s policies and military operations, the chill is also quite familiar. This type of social media monitoring has been a feature of Israeli surveillance practice since it was incorporated into the country’s counterterrorism law in 2016. The 2021 war in Gaza sparked an uptick in arrests and terrorism charges against Palestinians, with their social media posts brought as evidence.

And as the Israeli magazine +972 reported last year, the Israel Defense Forces have also deployed AI to target Hamas members directly—not on social media, but in real life. Military officers speaking on the condition of anonymity told reporters that the automated system known as Lavender identified some 37,000 potential Hamas militants in the war’s early weeks.

As the United States launches its AI-powered digital inquiry to find supposed Hamas supporters, the political theorist Hannah Arendt’s concept of the imperial “boomerang effect” feels ever more relevant. The violence and repression European powers used to exert imperial control in their colonies, Arendt wrote in “The Origins of Totalitarianism,” eventually laid the groundwork for the rise of totalitarianism and fascism in Europe. Perhaps similarly, the American-backed Israeli war against Hamas is increasingly shaping both the policies and the technologies being deployed back home.

The consequences of relying on these AI systems may still seem contained, or at least limited to non-citizens. But the implications for American democracy are profound: some civil liberties watchdogs warn the U.S. could lose its status as a functioning democracy in as little as six months. The Trump administration’s access to these potentially hazardous tools gives it far greater capacity to carry out what many political experts and international observers worry is a quick slide into a new kind of technology-fueled authoritarianism.

It may be immigrants now, but as Arendt would warn us, it will be citizens later.