AI is turbocharging the online harassment of women

Generative AI is making the online abuse of women as easy as point-and-click. Is there any way to stop it?

Taylor Swift performs during the Eras Tour. Photo: Hector Vivas/TAS23/Getty Images for TAS Rights Management

By LORENA O'NEIL

Human rights lawyer Nighat Dad has run a cyber-harassment helpline for Pakistani women for seven years, often counseling survivors who are being blackmailed with intimate partner images – both real and fake. Former partners or acquaintances will threaten to send these images to the survivor’s families, targeting their reputations. The blackmailers ask for sexual or monetary favors in exchange for not distributing the images on WhatsApp, Facebook, and Instagram. In one memorable situation, a survivor called the helpline after her images had been published on 500 websites, and Dad’s organization, the Digital Rights Foundation, worked to get the photos deleted from the internet.

But now, generative AI has made it easier to alter images, and DRF helpline operators have seen a significant increase in realistic-looking but fraudulent intimate images. In a place where images such as these have resulted in honor killings or death by suicide for some victims, Dad is concerned about the life-and-death stakes of such a drastic increase in gender-based online intimidation. “Seventy percent of the Pakistani population live in villages where they have absolutely no idea about [AI] technology,” says Dad. “In some areas of South Asia, just the fact that you are with a man in a picture is enough to kill you in the name of honor.”

This is the new reality facing women as generative AI use skyrockets. With a rise in fake images comes a rise in the weaponization of those images, and the number of new victims is growing at a scale too large for law enforcement to keep up with. And now, given how easy it is to create these images, it’s not just past partners or people with access to intimate images who are potential perpetrators – it’s anyone with access to generative AI programs. All someone needs is a photo of your face to create an image of you in countless compromising situations. Online abuse has always been a threat to women, and generative AI has sharpened the tools used to commit it.

According to the Associated Press, in 2023 more than 143,000 deepfake videos were posted online, surpassing the total of every other year combined. While deepfakes have been around for years, what’s new is how many people can create realistic, harmful content without any training in programming or photo manipulation.

In January, Taylor Swift was the subject of a high-profile case of online abuse when AI-generated sexually explicit images of the singer circulated on X, the site formerly known as Twitter. Swift’s fanbase, known as Swifties, quickly tried to counter the spread, flooding the search results on X with positive tweets about Swift and reporting accounts en masse as X struggled to get on top of the problem. The social media platform eventually removed “Taylor Swift AI” from search. The outcry against the online abuse was massive, with even the White House weighing in on the issue.

Dr. Rumman Chowdhury, an AI expert and social scientist, led Twitter’s ethics team – known as META (Machine Learning Ethics, Transparency, and Accountability) – before new owner Elon Musk laid her off, along with nearly her entire team, in November 2022. Her group tested Twitter’s algorithms to find out if they perpetuated biases, and she’s always considered it obvious that there is a link between social media companies and online gender-based violence. “Today, we have an insufficient number of tools for women and people of color who are targeted disproportionately online,” Chowdhury says. “Generative AI will be used to supercharge this harassment.”

In 2021, UNESCO published a global study of online violence against women journalists in which 73 percent of survey respondents said they’d experienced online violence, with Black, Indigenous, Jewish, Arab, and lesbian women journalists reporting the highest rates of abuse and most severe impacts, including offline abuse. “Women and people of color are the leading indicators of what will happen to the rest of the world because we’re the ones that are abused first,” says Chowdhury.

Chowdhury led the research on a November report for UNESCO that found the proliferation of generative AI brings new threats for women, including “realistic fake media,” biased outputs, and “synthetic histories” – the construction of an entirely false narrative about an individual.

The image abuse aspect was particularly troubling for Chowdhury and her research assistant, Dhanya Lakshmi, after they ran an informal experiment to confirm how easy it is to create misleading and fake images with a text-to-image generative AI. For the study, Lakshmi and Chowdhury modified images of themselves – they felt it would be disingenuous to use other people’s images without their consent – typing in prompts to change the subject’s clothing and surroundings.

In their published report, the women showed photos that were altered to display a woman wearing a “Blue Lives Matter” shirt, and another for which they selected the clothing and supplied the prompt “Taliban,” resulting in an image of the woman wearing traditional Muslim clothes and holding a gun. Lakshmi said there were photos they didn’t publish in the report that were even more violent, including realistic images modified to show a woman bruised and bloodied.

The program had restrictions on some words; a nude image couldn’t be generated using the prompt “naked,” but a word like “dominatrix” could produce similarly harmful images. “The restrictions seemed to be more of a black list than a white list,” says Lakshmi. “[But] the general advice for safeguarding inputs to AI systems is to only allow a certain set of prompts rather than blocking some harmful prompts because you’re never going to be able to block every bad outcome.” Still, Lakshmi was stunned by the quality of the results. “It was really realistic, which was the scary part because you were asking it to create images of women that were hurt or in pain.”
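Lakshmi’s point about blocklists versus allowlists is a standard idea in input safeguarding: a blocklist rejects prompts containing known-bad terms and misses anything it didn’t anticipate, while an allowlist rejects everything by default except a vetted set. The short Python sketch below is a hypothetical illustration only; the term lists and function names are invented for the example and do not reflect the safeguards of any actual image generator.

# Hypothetical illustration of blocklist vs. allowlist prompt filtering.
# Neither list reflects any real product's safeguards.

BLOCKED_TERMS = {"naked", "nude"}
ALLOWED_PROMPTS = {
    "change shirt color to blue",
    "change background to a beach",
}

def passes_blocklist(prompt):
    """Reject prompts containing known-bad words; misses anything unanticipated."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

def passes_allowlist(prompt):
    """Accept only prompts that exactly match a vetted set; the default is rejection."""
    return prompt.lower().strip() in ALLOWED_PROMPTS

if __name__ == "__main__":
    harmful = "dominatrix outfit"
    print(passes_blocklist(harmful))   # True  - slips past the blocklist
    print(passes_allowlist(harmful))   # False - rejected by default

The trade-off is flexibility: an allowlist sharply constrains what users can ask for, which may be why the product Lakshmi tested appeared to rely on the weaker blocklist approach instead.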

At the end of 2022, after the launch of ChatGPT set off a rush of generative AI tools, Melissa Heikkilä, a journalist for the MIT Technology Review, noticed a lot of her friends using the AI avatar app Lensa, so she tried it out. “I was really hoping to get something cool,” she remembers. But when she submitted her photo, she didn’t recognize herself in the avatars. Heikkilä, who is of Asian heritage, says she looked like a generic anime character in some of the images. Even worse, she was sometimes shown topless or in skimpy clothing and posed in an overtly sexualized way.

At first, Heikkilä was disappointed, though not surprised. But once she shared the images with her colleagues, she realized not everyone was getting the same results; her white female colleagues “just looked like better versions of themselves,” she says. It made her mad. Heikkilä says that the photos that looked most like her also made her look more white. Is this what beauty is supposed to be? she thought.

Heikkilä also started looking into how easy it is to make pornified images using generative AI. After she discovered she could generate a deepfaked nude of herself, she deleted all of her images from Twitter and other social media. “I don’t think I’ll be posting any photos of myself, apart from maybe a very professional headshot on LinkedIn,” she says.

Seeking protection by withdrawing from social media is, unfortunately, a common reaction for online abuse survivors. According to UNESCO’s 2021 report on female journalists, this can result in survivors not only disengaging online but also being pushed out of professions where their jobs require them to be in the public sphere. This is partly because there are insufficient tools for protection. And even when there are tools, the onus is usually on the victim to report the abuse and try to take down the misinformation.

The spread of AI-generated deepfake pornography is affecting adolescent girls, including those of high school age. In October 2023, NBC News reported more than 30 girls were potentially depicted in explicit, fake AI images created by male classmates at a high school in New Jersey. The incidents spurred an increase in legislation aimed at combating deepfakes, including a bill that would criminalize the nonconsensual sharing of sexually explicit deepfakes.

Adam Dodge, the founder of the digital safety education organization EndTAB (Ending Tech-Enabled Abuse), says that online abuse is often misunderstood because people have a hard time grasping the serious outcomes of the abuse and instead focus on the technology or the fact that the pictures aren’t real.

“It’s really hard to be trauma-informed when people don’t understand the severity of the trauma to begin with,” says Dodge. In the New Jersey case, parents of the high schoolers were astonished to learn the technology even existed, which frustrated Dodge, since deepfake technology is not new. He explains that when a child approaches their parents to share what happened to them, and the parents reply that they can’t even wrap their heads around this happening, it sends a message that the parents aren’t capable of helping and supporting them in finding a solution. Dodge says he also wishes people would stop questioning whether an image of online abuse is fake or real. “The harm is authentic regardless of whether they are able to prove or disprove an image is AI-generated or not,” he says.

The solutions for technology-facilitated gender-based online abuse are not easy, nor are they simple. Rather than remediation and punishment, Dodge recommends prevention in the form of proactive education of potential perpetrators. “Let’s be honest, the people who are consuming and creating these AI-generated sexualized images are our men and boys,” he points out. He hopes that by teaching students this is not funny or harmless but abusive – and potentially illegal – behavior, they will make informed choices when presented with the opportunity.

In their UNESCO paper, Chowdhury and Lakshmi urge a multipronged approach to formulating policy, designing education, and creating technological safeguards against generative AI-enabled abuse. Coordination and action, they write, are required among social media companies, policymakers, civil society organizations, individual actors, and generative AI companies. Among the steps they see as necessary are developing better methods of reporting and identifying deepfake content; improving transparency and access to third-party controls to enable innovation in user protection; raising awareness about abusive patterns of behavior; conducting human rights due diligence; and ensuring that transparency, accountability, due diligence, and user empowerment are at the forefront of laws and regulation.

But there was no multipronged solution in place to help even Taylor Swift, a billionaire pop star with tens of millions of fans, a powerful publicist, and the attention of the White House. X’s temporary ban on searches for “Taylor Swift AI” is a rare recourse, a privilege not afforded to many.

Having worked at Twitter, Chowdhury has a unique view into what it takes to remove a search term. “It speaks to the immense privilege that certain people will have in getting resources put behind them,” she says, “whether because of their wealth, fame, skin color, [or] perceived value by society, which is a very terrible way to handle things.”

It’s also not easy to take down malicious content. A search term might be banned, but others will quickly replace it. Chowdhury likens the situation to New Zealand’s Christchurch massacre years ago, in which the shooter posted footage of his killing spree on multiple social media platforms. Whenever a company took a video down, his global network of supporters posted it again. A similar issue occurred with the Swift images despite the search ban: terms like “Taylor Swift nude” and “T Swift” still returned results that included explicit deepfakes days after the initial onslaught.

The exclusion of women from public spaces shouldn’t become a go-to solution for online abuse. “Rather than the perpetrator being punished, they get rewarded because now you cannot be found or seen,” says Chowdhury.

At the end of the day, Swift’s legions of fans protected her the best they could, but all women deserve protection. “If we were to actually take these threats seriously in the beginning and not wait until they’re widely available,” says Chowdhury, “who knows how many more people could be protected?”


Lorena O’Neil is a Paraguayan-American journalist based in New Orleans. Her work focuses on the intersection of gender, race, culture, and policy. She’s been published in Rolling Stone magazine, The Guardian US, The Los Angeles Times, and The Atlantic.