OpenAI hesitates on releasing its AI image detector

In addition to OpenAI, numerous organisations are exploring watermarking and detection techniques for generative media in response to the proliferation of AI deepfakes. Photo: Reuters

Published Oct 24, 2023

OpenAI has engaged in extensive discussions about the release of a tool designed to determine whether an image was created using DALL-E 3, OpenAI’s generative AI art model.

However, the startup has not reached a decision on the matter, as the tool does not yet meet OpenAI’s quality standards.

According to OpenAI researcher Sandhini Agarwal, speaking to TechCrunch, the tool's accuracy is promising but has not yet reached the desired level.

Agarwal expressed concerns about releasing a potentially unreliable tool, as its decisions could significantly impact the perception of images, including whether they are considered authentic works of art or misleading forgeries.

OpenAI's target accuracy for the tool is exceptionally high: Mira Murati, OpenAI's chief technology officer, has stated that the classifier is currently "99% reliable" at identifying unmodified images generated by DALL-E 3.

A draft OpenAI blog post shared with TechCrunch indicated that the classifier maintains over 95% accuracy even when images have undergone common modifications such as cropping, resizing, JPEG compression, or the superimposition of text or cutouts from real images onto generated images.
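
To make the robustness claim concrete, the sketch below shows what such an evaluation might look like. It is purely illustrative: detect_dalle3 is a hypothetical stand-in for OpenAI's unreleased classifier, which has no public API, and the transformations mirror the modifications named in the draft post.

    # Illustrative sketch only: detect_dalle3 is a hypothetical stand-in
    # for OpenAI's unreleased classifier, which has no public API.
    import io
    from PIL import Image

    def detect_dalle3(image):
        # A real classifier would return the probability that the
        # image was generated by DALL-E 3.
        raise NotImplementedError("no public classifier exists yet")

    def common_modifications(image):
        # Yield (name, variant) pairs for the edits named in the draft post.
        w, h = image.size
        yield "cropped", image.crop((w // 10, h // 10, w * 9 // 10, h * 9 // 10))
        yield "resized", image.resize((w // 2, h // 2))
        buffer = io.BytesIO()
        image.convert("RGB").save(buffer, format="JPEG", quality=60)
        buffer.seek(0)
        yield "jpeg-compressed", Image.open(buffer)

    def robustness_report(image):
        # The draft post claims over 95% accuracy across variants like these.
        for name, variant in common_modifications(image):
            print(name, detect_dalle3(variant))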

OpenAI's hesitation may also be influenced by the controversy surrounding its previous public classifier tool, which aimed to detect AI-generated text from both OpenAI's models and those of third-party vendors. That tool drew widespread criticism for its low accuracy rate and was eventually pulled.

Agarwal also highlighted the philosophical question of what defines an AI-generated image. While artwork created entirely by DALL-E 3 is clearly AI-generated, the classification becomes far less clear for images that have undergone multiple rounds of editing, been combined with other images, or been run through post-processing filters.

OpenAI says it is actively seeking input from artists, and from others who would be significantly affected by such classifier tools, as it navigates this complex issue.

In addition to OpenAI, numerous organisations are exploring watermarking and detection techniques for generative media in response to the proliferation of AI deepfakes.

DeepMind has proposed a specification called SynthID for marking AI-generated images in an imperceptible manner to humans but detectable by specialised detectors. French startup Imatag offers a watermarking tool that claims to be resilient to resizing, cropping, editing, and image compression, similar to SynthID. Another firm, Steg.AI, uses an AI model to apply watermarks that can withstand resizing and other edits.
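
As a rough intuition for how an imperceptible watermark can work, the toy sketch below hides one bit per pixel in the least significant bit of a greyscale image, changing each pixel's intensity by at most one level. This naive scheme is easily stripped and is not how SynthID, Imatag, or Steg.AI actually work (their marks are learned and designed to survive editing); it only illustrates the embed-then-detect idea.

    # Toy least-significant-bit watermark. NOT the method used by SynthID,
    # Imatag or Steg.AI, whose marks are learned and robust to editing.
    import numpy as np

    def embed(pixels, bits):
        # Overwrite each pixel's least significant bit with a watermark bit;
        # intensities change by at most 1, which is imperceptible.
        return (pixels & 0xFE) | bits

    def detect(pixels, bits):
        # Fraction of watermark bits present; ~0.5 means "no mark".
        return float(np.mean((pixels & 1) == bits))

    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # stand-in greyscale image
    mark = rng.integers(0, 2, (64, 64), dtype=np.uint8)     # secret bit pattern

    marked = embed(image, mark)
    print(detect(marked, mark))  # 1.0: mark fully recovered
    print(detect(image, mark))   # ~0.5: unmarked image looks like chance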

When asked if OpenAI's image classifier would support detecting images created with non-OpenAI generative tools, Agarwal did not commit to it but indicated that it could be considered depending on the reception of the current classifier tool.

"One of the reasons why right now [the classifier is] DALL-E 3-specific is because that's, technically, a much more tractable problem," Agarwal said. "[A general detector] isn't something we're doing right now… But depending on where [the classifier tool] goes, I'm not saying we'll never do it."

Tools to detect AI-generated media seem destined to become central to the internet of the future. Without good detectors, society, its news media, and its legal systems will face an impossible task in deciphering what is and is not true. It remains to be seen whether these tools will be ubiquitous and available from several sources (much as ad blockers are today), or whether giant tech conglomerates like Alphabet and Microsoft will hold a proprietary stranglehold on what is deemed real.

Reddit

Reddit may take steps to limit the access of search engines like Google and Bing to its content if it fails to secure agreements with generative AI companies to compensate for its data. Originally, a Washington Post report suggested that Reddit might require users to log in to access its content, but Reddit denied this aspect, asserting that "nothing is changing."

However, after a correction by the Post, it now appears that Reddit is instead considering blocking Google's and Bing's search crawlers. This would mean that Reddit posts would no longer appear in search results and could not be used by search engines to produce the AI-powered summaries that appear alongside results (a practice now often referred to as ‘generative search’).
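
The standard mechanism for this kind of block is a site's robots.txt file, which tells crawlers which paths they may fetch. Reddit has not published any such rules, but a hypothetical version that shuts out the two search crawlers while leaving others untouched might look like this:

    # Hypothetical robots.txt; Reddit has not published any such rules.
    User-agent: Googlebot
    Disallow: /

    User-agent: Bingbot
    Disallow: /

    User-agent: *
    Allow: /

Compliance with robots.txt is voluntary, but the major search engines honour it, so rules like these would gradually remove Reddit from their indexes.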

Reddit's reaction is unsurprising: Reddit data has made up a significant portion of the web data used by companies like Google and OpenAI to train state-of-the-art AI language models, and Reddit has, to date, received no public compensation for this.

The Washington Post's report also noted that more than 535 news organisations have opted to restrict access to their content by companies seeking to train AI models.

This is not Reddit's first move against the use of its data in other companies' AI applications. Earlier in the year, Reddit prompted widespread protests from its users when it announced changes to its API pricing. The changes were intended to target large companies that either scrape Reddit data to train AI tools or aggregate its content so that people never visit the Reddit website, costing Reddit advertising revenue. However, the changes also made community-driven applications unsustainable, including popular third-party apps that offer alternative ways to browse Reddit.

Furthermore, X, formerly Twitter, has introduced new pricing tiers for its API access, with owner Elon Musk citing data scraping by AI startups as justification for the platform's reading limits. These developments signal the growing tension between content platforms, AI companies, and the broader online ecosystem.

James Browning is a freelance tech writer and local music journalist.

BUSINESS REPORT