OpenAI Explores Measures to Enhance Content Transparency

With the generative AI content wave steadily engulfing the broader internet, OpenAI has announced two new measures designed to bring more transparency to online content, and to ensure that people know what’s real, and what’s not, in visual creations.

First off, OpenAI has announced that it’s joining the Steering Committee of the Coalition for Content Provenance and Authenticity (C2PA) to help establish a uniform standard for digital content certification.

As per OpenAI:

“Developed and adopted by a wide range of actors including software companies, camera manufacturers, and online platforms, C2PA can be used to prove the content comes from a particular source.”

So essentially, the aim of the C2PA initiative is to develop web standards for AI-generated content, which would embed the creation source in the content’s metadata, helping to ensure that users are aware of what’s artificial and what’s real on the web.
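To make the idea concrete, here’s a minimal sketch of how a provenance manifest of this kind could work, binding a content hash and a claimed source together with a signature. This is an illustration only: the field names are invented, and real C2PA manifests use certificate-based signatures rather than the shared-key HMAC stand-in below.

```python
# A minimal sketch of the provenance idea: bind a content hash and a
# claimed source together in a signed manifest that travels with the file.
# The field names are invented for illustration, and real C2PA manifests
# use certificate-based signatures, not this shared-key HMAC stand-in.
import hashlib
import hmac
import json

SIGNING_KEY = b"issuer-secret"  # stand-in for the issuer's signing credential


def attach_manifest(content: bytes, source: str) -> dict:
    """Build a manifest binding the content's hash to its claimed source."""
    claim = {
        "source": source,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim


def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check the signature, and that the content hasn't changed since signing."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claim["content_sha256"] == hashlib.sha256(content).hexdigest()
    )


image_bytes = b"...raw image data..."
manifest = attach_manifest(image_bytes, "example-generator")
print(verify_manifest(image_bytes, manifest))            # True
print(verify_manifest(image_bytes + b"edit", manifest))  # False: content altered
```

The useful property is that any edit to the content, or to the claimed source, invalidates the signature, so a platform can check the pairing cheaply before deciding how to label or distribute a post.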

Which, if it’s possible, would be hugely beneficial, because social apps are increasingly being taken over by fake AI images like the one below, which many, many people apparently mistake for the real thing.

[Image: Facebook AI post]

Having a simple way to check such images would be a big benefit in dispelling these posts, and may even enable the platforms to limit their distribution as well.

But then again, such safeguards are also easily circumvented by even slightly savvy web users.

Which is where OpenAI’s next initiative comes in:

“In addition to our investments in C2PA, OpenAI is also developing new provenance methods to enhance the integrity of digital content. This includes implementing tamper-resistant watermarking – marking digital content like audio with an invisible signal that aims to be hard to remove – as well as detection classifiers – tools that use artificial intelligence to assess the likelihood that content originated from generative models.”
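As a rough illustration of the embed-and-detect structure that watermarking involves, here’s a toy least-significant-bit scheme in Python. To be clear, this is not OpenAI’s method, and LSB marks are the opposite of tamper-resistant (they don’t survive compression or screenshots); the bit pattern is a made-up stand-in for the “invisible signal” the company describes.

```python
# Toy illustration of an invisible image watermark: hide a bit pattern in
# the least significant bit of each pixel. Real tamper-resistant schemes
# (and whatever OpenAI ships) are far more robust -- LSB marks do not
# survive compression or screenshots -- but the embed/detect structure
# is the same.
import numpy as np

WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical signal


def embed(pixels: np.ndarray) -> np.ndarray:
    """Overwrite the LSB of the first len(WATERMARK) pixels with the mark."""
    marked = pixels.copy()
    flat = marked.ravel()
    flat[: WATERMARK.size] = (flat[: WATERMARK.size] & 0xFE) | WATERMARK
    return marked


def detect(pixels: np.ndarray) -> bool:
    """Check whether the expected bit pattern is present in the LSBs."""
    flat = pixels.ravel()
    return np.array_equal(flat[: WATERMARK.size] & 1, WATERMARK)


image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed(image)
print(detect(marked))  # True
print(detect(image))   # almost certainly False: mark absent
```

The hard engineering problem isn’t embedding or detecting the signal; it’s making the signal survive the cropping, re-encoding, and screenshotting that content goes through as it spreads, which is where the “tamper-resistant” part comes in.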

Invisible signals within AI-created images could be a big step, because screenshotting or editing an image wouldn’t easily strip them out. More advanced hackers and groups will likely find ways around this too, but if it can be implemented effectively, it could significantly limit misuse.

OpenAI says that it’s now testing these new approaches with external researchers, in order to determine how viable its systems are for improving visual transparency.
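The detection-classifier side could look something like the toy sketch below: extract features from an image, train a model on labeled real and generated examples, and output a likelihood score. The features and training data here are invented placeholders; production detectors are deep networks trained on large corpora.

```python
# Toy sketch of a detection classifier of the kind OpenAI describes:
# a model that scores how likely content is to be AI-generated. The
# features (crude pixel statistics) and the synthetic training data are
# placeholders; real classifiers are deep networks trained at scale.
import numpy as np
from sklearn.linear_model import LogisticRegression


def features(image: np.ndarray) -> np.ndarray:
    """Crude summary statistics standing in for learned features."""
    diffs = np.diff(image.astype(np.float64), axis=1)
    return np.array([image.mean(), image.std(), np.abs(diffs).mean()])


rng = np.random.default_rng(0)
# Hypothetical training data: label 0 = real, 1 = generated.
real = [rng.integers(0, 256, (32, 32)) for _ in range(50)]
fake = [np.clip(rng.normal(128, 20, (32, 32)), 0, 255) for _ in range(50)]
X = np.array([features(im) for im in real + fake])
y = np.array([0] * 50 + [1] * 50)

clf = LogisticRegression().fit(X, y)
# predict_proba returns [[p_real, p_generated]] for each input
print(clf.predict_proba([features(fake[0])])[0][1])  # likelihood of "generated"
```

Note that a classifier like this outputs a probability, not a verdict, which matches OpenAI’s framing of “assessing the likelihood” that content came from a generative model.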

And if it can establish improved methods for visual detection, that’ll go a long way towards enabling greater transparency around AI-generated images.

Really, this is a key concern, given the rising use of AI-generated images, and the coming expansion of AI-generated video as well. And as the technology improves, it’s going to be increasingly difficult to know what’s real, which is why advanced digital watermarking is an essential consideration to avoid the gradual distortion of reality, in all contexts.  

Every platform is exploring similar measures, but given OpenAI’s presence in the current AI space, it’s critical that OpenAI, in particular, is pursuing them.
