OpenAI unveiled its new artificial intelligence (AI) image identification and detection tool on Tuesday. The AI firm announced the new tool while highlighting the need to authenticate AI-generated content and raise awareness around it. The company also officially joined the Coalition for Content Provenance and Authenticity (C2PA) committee, which created an open standard for labeling AI-generated content. Notably, OpenAI has been applying this standard to its Dall-E-generated images since February 2024 and continues to add AI-related information to image metadata.

In a blog post, OpenAI highlighted the new challenges that have arisen with the advent of AI-generated content. The company said: “As generated audiovisual content becomes more widespread, we believe it will be increasingly important for society as a whole to adopt new technologies and standards that help people understand the tools used to create the content they find online.” Additionally, the creator of ChatGPT said it is taking two different measures to contribute to the authentication of AI content.

As its first step, OpenAI has officially joined the C2PA committee, describing C2PA as a widely used standard for certifying digital content. The company also emphasized that the standard is followed by a wide range of software companies, camera manufacturers, and online platforms. Simply put, C2PA recommends adding information to the metadata of images and other file types to reveal how they were created. While an image taken by a camera will include the name and specifications of the camera, an AI-generated image will include the name of the AI model.

This type of authentication method is used because metadata is difficult to remove or alter, and it persists even if the image is shared, cropped, or otherwise modified.

Underscoring its second step, OpenAI said it is working on a new tool that can identify AI-generated images. Without naming the tool, the company called it an “OpenAI image detection classifier.” The tool predicts the probability that an image was created by Dall-E. According to the post, the tool correctly labeled 98 percent of Dall-E-generated images when compared against real images, even when filters were applied or the images were cropped. However, the tool struggles when distinguishing Dall-E images from those generated by other AI models. The AI firm said that in these cases, the tool mislabels about 5 to 10 percent of the sample.
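The figures above can be understood in terms of standard classifier metrics: the 98 percent figure corresponds to correctly flagging Dall-E images, while the 5 to 10 percent error on other AI models corresponds to false positives. The following sketch, with a made-up `evaluate` helper and toy data (OpenAI has not published its evaluation code), shows how such rates would be computed from a probability-producing classifier.

```python
def evaluate(probs, labels, threshold=0.5):
    """Return (true-positive rate, false-positive rate).

    probs  -- predicted probability of "Dall-E-generated" per image
    labels -- 1 if the image really came from Dall-E, else 0
    A threshold turns each probability into a yes/no label.
    """
    preds = [1 if p >= threshold else 0 for p in probs]
    tp = sum(1 for p, l in zip(preds, labels) if p == 1 and l == 1)
    fp = sum(1 for p, l in zip(preds, labels) if p == 1 and l == 0)
    pos = sum(labels)
    neg = len(labels) - pos
    return (tp / pos if pos else 0.0, fp / neg if neg else 0.0)

# Toy sample: 4 Dall-E images followed by 4 images from other sources.
tpr, fpr = evaluate(
    [0.9, 0.8, 0.95, 0.4, 0.1, 0.6, 0.2, 0.05],
    [1,   1,   1,    1,   0,   0,   0,   0],
)
print(tpr, fpr)  # 0.75 0.25
```

In OpenAI's reported numbers, the true-positive rate against real photos would be about 0.98, while the false-positive rate against other AI generators would be roughly 0.05 to 0.10.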

Meanwhile, OpenAI has already opened up the tool for limited public testing, inviting research labs and investigative journalism nonprofits to register with the AI firm and gain access to it.
