If you visited X, the platform previously known as Twitter, in the last day or two, you likely encountered AI-generated deepfake images and videos of Taylor Swift. These images falsely depicted her in explicit sexual acts with fans of the Kansas City Chiefs, the NFL team of her boyfriend, Travis Kelce.
These disturbing images sparked outrage among Swift’s fans, with the hashtag #ProtectTaylorSwift trending alongside “Taylor Swift AI” on X. News outlets worldwide covered the incident as X struggled to remove the content, which kept resurfacing through new accounts.
This incident has reignited calls from U.S. lawmakers for stricter regulation of the rapidly evolving generative AI industry. However, there’s a delicate balance to maintain between curbing harmful content and preserving free speech, parody, and fan art protected by the First Amendment.
The specific tools used to create these deepfakes remain unknown. Popular services like Midjourney and OpenAI’s DALL-E 3 prohibit generating sexually explicit content. Newsweek reported that the X account @Zvbear admitted to posting some images but has since gone private.
Independent tech news outlet 404 Media traced the images to a Telegram group and indicated they were created using Microsoft’s AI tools, specifically its Designer product, which is powered by OpenAI’s DALL-E 3 and likewise prohibits such content. Despite these restrictions, users find ways to bypass them or turn to other services, leading to the proliferation of explicit AI-generated images.
Stable Diffusion, an open-source AI model by Stability AI, can be used to create a wide range of images, including explicit content. This led to the image generation community Civitai facing scrutiny for hosting nonconsensual pornographic deepfakes. Civitai is now working to eliminate this type of content. Stability AI’s implementation on Clipdrop also bans explicit imagery.
The misuse of AI tools for creating nonconsensual explicit content is a growing concern. While AI is embraced for creative and consensual projects, such as HBO’s True Detective: Night Country and works by Kanye West and Marvel, its potential for harm cannot be ignored. AI vendors and users must be prepared to address and prevent the creation of offensive content, especially as new regulations could limit AI capabilities.
Reports indicate that nonconsensual explicit images of Swift were uploaded to Celeb Jihad, and Swift is reportedly considering legal action. It remains unclear whether any lawsuit would target Celeb Jihad, AI tool providers such as Microsoft or OpenAI, or the individuals responsible. This situation underscores the urgent need for regulation of AI tools that can create realistic but harmful depictions of people.
U.S. lawmakers, like Congressman Tom Kean Jr. of New Jersey, are pushing for new AI regulations. Kean has introduced the AI Labeling Act and the Preventing Deepfakes of Intimate Images Act. The AI Labeling Act would require clear labeling of AI-generated content, though its effectiveness in preventing explicit content remains uncertain. Companies like Meta and OpenAI are already working on labeling AI-generated images to curb misuse.
Kean’s second bill, co-sponsored with Congressman Joe Morelle of New York, aims to amend the 2022 Violence Against Women Act to allow victims of nonconsensual deepfakes to sue for damages and seek legal action against the creators. This bill stops short of banning AI-generated images of public figures entirely, which would likely face legal challenges.
Unauthorized depictions of public figures have long been protected as free speech under the First Amendment, even if explicit. Celebrities have successfully sued for commercial misuse of their images, a concept known as the “right of publicity.” If Swift sues, it would likely be under this right.
While the new bills might not aid Swift immediately, they could help future victims seek justice. To become law, these bills need to pass through several legislative steps, including committee reviews, votes in the House and Senate, and a presidential signature.
Congressman Kean highlighted the urgency of addressing AI regulations, referencing a case where Westfield High School students used AI to create fake explicit images of classmates, causing significant distress. Kean’s proposed bills aim to ensure people are aware when they encounter AI-generated content and to create frameworks for labeling and identifying such content.