April 30, 2024, 09:57 / IT news

Finally. Global AI vendors team up to remove child sexual abuse material from training data



CSAM has already been found in AI training materials (Photo: Freepik)

Google, Meta, OpenAI, Microsoft and Amazon have pledged to review their AI training data and remove any child sexual abuse material (CSAM).

CSAM has already been found in training materials for artificial intelligence models, and a wide range of companies have finally agreed to combat the problem. Google, Meta, OpenAI, Microsoft, Amazon, Anthropic, Stability AI and others have signed a new set of principles aimed at limiting the spread of CSAM. They promise to ensure that their training datasets do not contain CSAM, to avoid datasets with a high risk of including CSAM, and to remove CSAM images or links to CSAM from data sources. The companies also undertake to "stress test" their AI models to ensure they do not generate images of child sexual abuse, and to release models only after they have been assessed for child safety.

In a blog post, Google said that in addition to adopting the principles, the company has also increased advertising grants to the US National Center for Missing and Exploited Children (NCMEC) to promote the organization's initiatives.

As NV Techno previously reported, CSAM was discovered, in particular, in the LAION-5B dataset, on which Stability AI trained its Stable Diffusion model.

Read also:
Do not post photos of children. Pedophiles create child porn using AI — BBC investigation
Pornhub has found a way to deal with those who search for child porn. A simple chatbot helped