How AI Could Complicate the 2024 US Election Landscape

Generative AI, in the form of both chatbots and deepfakes, is expected to create significant challenges during the 2024 US elections, while efforts to regulate the technology are likely to stall for political reasons, according to Nathan Lambert, a machine learning researcher at the Allen Institute for AI. Lambert also co-hosts The Retort AI podcast with fellow researcher Thomas Krendl Gilbert.

Lambert told VentureBeat that AI regulation is unlikely to be implemented in the US in 2024, especially since it is an election year and the topic is highly controversial. He noted that the election will play a crucial role in shaping the discourse around AI: how candidates position themselves on the technology, and how the media covers misuse of AI products.

As election campaigns ramp up, tools like ChatGPT and DALL-E will likely be used to generate content, creating messy questions of attribution: whether responsibility for a given piece of AI-generated material lies with the campaigns themselves, with malicious actors, or with companies like OpenAI.

Even though the 2024 US presidential election is nearly a year away, concern about AI's role in political campaigns is already rising. For instance, ABC News reported that Florida Gov. Ron DeSantis's campaign used AI-generated images and audio of Donald Trump over the summer.

A recent poll by The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy revealed that nearly 6 in 10 adults believe AI tools will amplify the spread of false and misleading information during the upcoming elections.

Big Tech companies are starting to address these issues. Google recently announced plans to limit the types of election-related prompts that its chatbot Bard and its search generative experience will respond to, with the restrictions expected to be in place by early 2024.

Meta has also stated that it will prohibit political campaigns from using newer generative AI advertising tools. Additionally, Meta will require advertisers to disclose the use of AI tools in creating or altering election ads on Facebook and Instagram. OpenAI has reportedly updated its policies to better manage disinformation and offensive content on ChatGPT and other products as the election approaches.

However, Wired reported that Microsoft’s Copilot (formerly Bing Chat) has been disseminating conspiracy theories, misinformation, and outdated or incorrect information, with new research suggesting these issues are systemic.

Lambert emphasized that keeping generative AI content clean and accurate throughout the election cycle might be "impossible." Alicia Solow-Niederman, an associate professor of law at George Washington University Law School, added that generative AI tools could have significant implications for democracy. She pointed to the concept of the "liar's dividend": in a world where truth becomes indistinguishable from falsehood, trust erodes and the foundation of democratic governance weakens.