OpenAI builds tool to detect images created by AI

The capabilities of AI-based image creation tools have advanced to the point where people may mistake their output for authentic, non-AI images. This raises concerns about their potential for misuse.
OpenAI has already introduced watermarks for DALL-E 3-generated images to maintain transparency about their source and authenticity. The company is now developing a new tool that can distinguish between real images and those generated by its DALL-E text-to-image model.
OpenAI’s generative AI tools will incorporate C2PA metadata
OpenAI’s official blog has announced that the company is working on new methods for detecting content generated by artificial intelligence. According to the company, the goal is to help researchers examine the authenticity of content, and OpenAI has joined the Coalition for Content Provenance and Authenticity (C2PA), a widely used standard for certifying digital content. This will allow creators to tag and certify their content to verify its true origin.
OpenAI says that once it launches widely, C2PA metadata will be integrated into Sora, the company’s video generation model. Sora is expected to be a premium text-to-video tool, much like DALL-E 3, available only to paid subscribers. According to a previous report, Sora will become generally available in 2024.
OpenAI creates new tool to detect content created by DALL-E 3
As mentioned above, OpenAI is also working on a new tool that uses AI to detect images created by DALL-E 3. More specifically, it predicts the likelihood that an image was generated by the model. According to the company, an image can still be detected after compression, saturation changes, or even cropping. In addition, the tool aims to be more resistant to attempts to remove content provenance signals.
The tool identifies DALL-E-generated images with up to 98% accuracy and, just as importantly, rarely misclassifies non-AI images as AI-generated.
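To put that 98% figure in perspective, a small Bayes-rule calculation shows how detection rate, false-positive rate, and the share of AI images in circulation combine into the probability that a flagged image really is AI-generated. Note that the 0.5% false-positive rate and 10% base rate below are illustrative assumptions, not figures from OpenAI; only the 98% detection rate comes from the article.

```python
# Illustrative Bayes'-rule calculation. The 0.5% false-positive rate and
# 10% base rate are assumptions for the sake of example; only the 98%
# true-positive rate is taken from the article.

def flagged_is_ai_probability(tpr: float, fpr: float, base_rate: float) -> float:
    """P(image is AI-generated | tool flags it), via Bayes' rule."""
    # Overall probability that any image gets flagged.
    flagged = tpr * base_rate + fpr * (1 - base_rate)
    return tpr * base_rate / flagged

p = flagged_is_ai_probability(tpr=0.98, fpr=0.005, base_rate=0.10)
print(f"{p:.1%}")  # → 95.6%
```

Under these assumptions, roughly 95.6% of flagged images would truly be AI-generated; a higher false-positive rate or a smaller share of AI images would drag that number down quickly, which is why keeping false positives low matters as much as raw accuracy.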
The company has already opened applications for access to the new image detection tool to a first group of testers, which includes research labs and journalism-focused nonprofits, and intends to gather feedback through its Researcher Access Program.