AI Community Divided on Israel-Hamas Conflict

Nearly 200 leading figures in artificial intelligence, including researchers and data scientists, signed an open letter last Tuesday via the ‘Responsible AI Community.’ The letter condemns Israel’s actions against Palestinians in Gaza and the West Bank and denounces the use of AI technologies in warfare, arguing that such technology should not make the loss of human life more efficient. It also highlights instances of anti-Palestinian bias in AI-enabled systems.

The letter was reportedly first circulated by Tina Park, who heads inclusive research and design at the Partnership on AI. It calls for an end to technology support for the Israeli government and to defense contracts with Israel’s military. The letter emphasizes that “history did not start on October 7, 2023,” arguing that the current crisis demonstrates the severe violence facilitated by AI-driven technologies. Israeli forces have struck more than 11,000 targets in Gaza since the conflict reignited on October 7, 2023.

Prominent signatories include Timnit Gebru, AI ethics researcher and founder of DAIR; Alex Hanna, director of research at DAIR; Abeba Birhane, senior fellow of trustworthy AI at the Mozilla Foundation; Emily Bender, professor of linguistics at the University of Washington; and Sarah Myers West, managing director of the AI Now Institute.

In response, several Israeli and Jewish AI leaders have voiced opposition to the letter, arguing that it fails to mention the Israeli hostages held in Gaza or to condemn the October 7 attacks by Hamas. Jules Polonetsky, CEO of the Future of Privacy Forum, expressed distress over the letter’s failure to condemn Hamas. Yoav Goldberg, a professor at Bar Ilan University, noted that some AI systems mentioned in the letter have saved Palestinian lives, for example by tracking hostages to hasten their rescue and by guiding missiles to avoid civilian casualties.

Shira Eisenberg, an AI engineer in Washington, DC, said that AI technologies are crucial in wartime, citing Israel’s AI-powered Iron Dome, which intercepts rockets fired from Gaza. She emphasized the importance of responsible AI use but warned that ruling out wartime applications entirely could jeopardize safety.

Some have criticized aspects of the letter and related social media comments as anti-Semitic. Goldberg said he felt betrayed and saddened by comments from AI researchers who signed the letter. Eran Toch, an associate professor at Tel Aviv University, said that Israeli members of the critical AI community feel isolated and disappointed by the lack of empathy shown by some signatories, and warned of anti-Semitic conspiracy theories propagated by some community members.

VentureBeat reached out to Gebru and Hanna for comments, but only Gebru responded, stating her focus was on supporting Palestinians rather than engaging in the debate.

The AI community appears to have grown increasingly divided since October 7. Some leaders have boycotted major events such as the Web Summit in Lisbon due to criticism of Israel’s response to the Hamas attacks. This divide, though significant, is viewed by some as less critical than other internal disputes within the AI community.

Dan Kotliar of the University of Haifa, who supports Palestinian self-determination and is critical of AI surveillance tools, said the letter’s implicit support for Hamas casts doubt on the ethical standards of its signatories. Kotliar argued that AI ethics must not fuel extremism and that responsible AI use is crucial, especially when dealing with threats from groups like Hamas.

Talia Ringer, an assistant professor at the University of Illinois, shared their personal grief over the conflict and the disconnect they felt with the AI research community’s reaction to the events of October 7. They acknowledged the importance of the concerns raised in the letter but expressed a yearning for an end to the violence and uncertainty about the future of their community relationships.

The controversy reveals deep rifts within the AI community over the ethical use of technology in conflict, the portrayal of Israel, and the broader implications for AI ethics and community solidarity.