Google Introduces Watermarks to Distinguish AI-Created Images

In a bid to counter the spread of false information, Google has unveiled an invisible watermark for images that can identify them as computer-generated.

Termed SynthID, this technology embeds an imperceptible watermark directly into images generated by Imagen, one of Google’s recent text-to-image generators. The AI-created label remains intact even when modifications like filters or color changes are applied.

SynthID doesn’t just mark images; it also examines incoming visuals to estimate the likelihood that they were produced by Imagen. It does this by scanning for the watermark and returning one of three levels of certainty: detected, not detected, or possibly detected.
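
To make the three-level verdict concrete, here is a minimal Python sketch. It is illustrative only: SynthID's detector and API are not public, so the confidence score, the classify_score function, and the cutoff values below are assumptions rather than Google's implementation.

```python
from enum import Enum


class WatermarkVerdict(Enum):
    DETECTED = "detected"
    POSSIBLY_DETECTED = "possibly detected"
    NOT_DETECTED = "not detected"


def classify_score(score: float,
                   high_threshold: float = 0.9,
                   low_threshold: float = 0.3) -> WatermarkVerdict:
    # Map a hypothetical watermark-confidence score (0.0 to 1.0) onto the
    # three verdicts Google describes. The thresholds are placeholders chosen
    # for illustration; Google has not published SynthID's decision boundaries.
    if score >= high_threshold:
        return WatermarkVerdict.DETECTED
    if score >= low_threshold:
        return WatermarkVerdict.POSSIBLY_DETECTED
    return WatermarkVerdict.NOT_DETECTED


# Example: 0.65 falls between the placeholder thresholds, so the image
# would be labelled "possibly detected".
print(classify_score(0.65).value)
```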

“While this technology isn’t flawless, our internal assessments indicate its accuracy against various common image alterations,” Google stated in a blog post on Tuesday.

A preliminary version of SynthID is currently available to certain customers of Vertex AI, Google’s platform for building generative AI applications. The company says that SynthID, developed by Google DeepMind in collaboration with Google Cloud, will continue to evolve and may be incorporated into other Google products or made available to third parties.

Addressing Deepfakes and Manipulated Photos

With deepfakes and manipulated photos growing more realistic, technology firms are racing to find effective ways of identifying and flagging altered content. Earlier this year, an AI-generated image of Pope Francis wearing a puffer jacket went viral, while AI-created depictions of former President Donald Trump being arrested circulated widely before his indictment.

In June, Vera Jourova, Vice President of the European Commission, called on signatories of the EU Code of Practice on Disinformation – including Google, Meta, Microsoft, and TikTok – to “implement technology that can detect such content and clearly indicate this to users.”

Tracking Content Origin

The Coalition for Content Provenance and Authenticity (C2PA), backed by Adobe, has been at the forefront of digital watermarking initiatives, while Google has largely pursued its own approach.

In May, Google introduced a tool named “About this Image” that lets users see when an image was first indexed by Google, where it first appeared, and where else it has been published online.

Additionally, Google announced that each AI-generated image produced by the company will include a mark in the original file to “provide context” if the image appears on another website or platform.

Yet, with AI technology advancing faster than humans can keep up, it remains unclear whether these technical solutions can fully address the problem. OpenAI, the company behind Dall-E and ChatGPT, acknowledged earlier this year that its tool for detecting AI-generated text (rather than images) is “imperfect” and warned that its results should be treated with caution.
