
YouTube makes it possible to request removal of AI content that uses your face or voice likeness

YouTube has brought in a new policy that allows users to request the removal of AI-generated content that uses their face or voice.

The new privacy violation rule comes as tech giants grapple with the issues arising from the growing use of AI-generated content.

A June update to the YouTube Privacy Guidelines in the YouTube Help Center reads: "If someone has used AI to alter or create synthetic content that looks or sounds like you, you can ask for it to be removed. In order to qualify for removal, the content should depict a realistic altered or synthetic version of your likeness."

Uploaders who are reported will have 48 hours to act on the complaint for it to be resolved.

Otherwise, the complaint will be investigated further, and the uploader will be required to remove any mention of the individual's name or details from the caption and tags.

Meta now requires users to label AI-generated content on Facebook, Instagram and Threads.

The tech giant wants to ensure people know whether an image, video or audio clip is real or has been manipulated.

Meta is therefore watermarking photos generated by its own AI tools and labelling those made with the likes of Midjourney, Dall-E and Bing Image Creator.

Such content will be tagged as "Imagined with AI".

Meta said: "It’s important that we help people know when photorealistic content they’re seeing has been created using AI. We do that by applying “Imagined with AI” labels to photorealistic images created using our Meta AI feature, but we want to be able to do this with content created with other companies’ tools too."

Nick Clegg, president of global affairs at Meta, said it is important that people know the difference so that misinformation does not spread amid global elections.

He added: "If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context."

The move came after a series of graphic deepfake images of celebrities, seemingly created using AI, appeared online, forcing social media sites to take action.
