Meta Signs Up to New AI Development Principles Designed to Combat CSAM

With an increasing stream of generative AI images flowing across the web, Meta has today announced that it’s signing up to a new set of AI development principles, which are designed to prevent the misuse of generative AI tools to perpetrate child exploitation.

The “Safety by Design” program, initiated by anti-human trafficking organization Thorn and responsible development group All Tech Is Human, outlines a range of key approaches that platforms can pledge to undertake as part of their generative AI development.

Those measures relate, primarily, to:

  • Responsibly sourcing AI training datasets, in order to safeguard them from child sexual abuse material (a simplified sketch of this measure follows this list)
  • Committing to stringent stress testing of generative AI products and services to detect and mitigate harmful results
  • Investing in research and future technology solutions to improve such systems
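For a rough sense of what the first of these measures can look like in practice, here’s a minimal, hypothetical sketch of hash-based dataset screening in Python. Everything in it (the hash list, file layout, and function names) is invented for illustration, and it simplifies heavily: real pipelines match against vetted industry hash databases, not a local set of SHA-256 digests.

```python
import hashlib
from pathlib import Path

# Hypothetical hash list of known abuse material, as would be sourced
# from a vetted hash-sharing program. Invented for illustration only;
# in practice this would be loaded from a managed, access-controlled feed.
KNOWN_BAD_HASHES: set[str] = set()


def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def filter_training_images(image_dir: Path) -> list[Path]:
    """Return only the image paths that do NOT match the known-bad list.

    Anything that does match would be excluded from the training set and,
    in a real pipeline, escalated through the operator's reporting and
    compliance processes rather than silently dropped.
    """
    return [
        path
        for path in sorted(image_dir.glob("*.jpg"))
        if sha256_of(path) not in KNOWN_BAD_HASHES
    ]
```

Note that exact cryptographic hashes only catch byte-identical copies. Production systems use perceptual hashing, the approach behind tools like Microsoft’s PhotoDNA and Thorn’s own Safer, precisely because it stays robust to resizing and re-encoding, which exact matching does not.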

As explained by Thorn:

“In the same way that offline and online sexual harms against children have been accelerated by the internet, misuse of generative AI has profound implications for child safety, across victim identification, victimization, prevention and abuse proliferation. This misuse, and its associated downstream harm, is already occurring, and warrants collective action, today. The need is clear: we must mitigate the misuse of generative AI technologies to perpetrate, proliferate, and further sexual harms against children. This moment requires a proactive response.”

Indeed, various reports have already indicated that AI image generators are being used to create explicit images of people without their consent, including kids. Which is obviously a critical concern, and it’s important that all platforms work to eliminate misuse where possible, by closing the gaps in their models that could facilitate such abuse.

The challenge here is that we don’t know the full extent of what these new AI tools can do, because nothing like this technology has existed before. That means a lot will come down to trial and error, and users are regularly finding ways around safeguards and protection measures in order to make these tools produce concerning results.

Which is why training datasets are an important focus, in ensuring that such content isn’t polluting these systems in the first place. But inevitably, there will be ways to misuse autonomous generation processes, and that’s only going to get worse as AI video creation tools become more viable over time.

Which, again, is why this is important, and it’s good to see Meta signing up to the new program, along with Google, Amazon, Microsoft and OpenAI, among others.

You can learn more about the “Safety by Design” program here.
