Last week, OpenAI introduced its GPT Store, where third-party developers can list and monetize their custom chatbots (GPTs). But that’s not all for the first month of 2024. On Monday, OpenAI shared a blog post detailing new safeguards for its AI tools, particularly the image generation model DALL-E and citation methods in ChatGPT. These efforts aim to combat disinformation as multiple countries gear up for elections later this year.
The blog post emphasizes the need for collaboration to protect the integrity of elections and to ensure that OpenAI's technology is not misused to undermine the democratic process. It cites safeguards such as a "report" function that lets users flag "potential violations" by custom GPTs, including impersonations of real people, which violate OpenAI's usage policies.
OpenAI also revealed that users will soon have access to real-time news reporting on ChatGPT, including attributions and links. This feature aligns with the company’s partnerships with news outlets like the Associated Press and Axel Springer.
One significant update is the adoption of content credentials from the Coalition for Content Provenance and Authenticity (C2PA). This non-profit initiative by tech companies and trade groups defines a standard for attaching signed provenance metadata to media, so that AI-generated content can be more easily identified. OpenAI plans to integrate these credentials into DALL-E 3 imagery early this year, although it hasn't announced a specific date.
Additionally, OpenAI previewed its “provenance classifier,” a tool to detect AI-generated images. This classifier was first mentioned when DALL-E 3 launched for ChatGPT Plus and Enterprise users. OpenAI’s internal tests show promising results, even with modified images. The tool will soon be available to select testers, such as journalists and researchers, for feedback.
With political entities like the Republican National Committee in the U.S. already using AI for campaign messaging and to impersonate rivals, the question is whether OpenAI's measures will effectively counter the anticipated surge in digital disinformation. While it's hard to predict, OpenAI aims to position itself as a promoter of truth and accuracy, despite the potential for misuse of its tools.