YouTube's New Policy Allows Removal Requests for AI-Generated Content

Meta is not alone in addressing its platform’s surge in AI-generated content. YouTube also introduced a policy update in June, enabling individuals to request the removal of AI-generated or synthetic content that mimics their face or voice. This change allows users to submit privacy violation requests for such content through YouTube’s established process. This expansion builds upon the platform’s responsible AI agenda, which was first announced in November.

Rather than having affected individuals report the content as misleading, as with a deepfake, YouTube encourages them to request removal directly as a privacy violation. According to the updated Help documentation, claims must be filed first-party, with exceptions for cases involving minors, people without computer access, deceased individuals, or similar circumstances.

However, submitting a takedown request does not guarantee content removal. YouTube will assess each complaint based on various factors before making a judgment.

For example, YouTube may consider whether the content is clearly labeled as synthetic or AI-generated, whether it uniquely identifies an individual, and whether it has value as parody, satire, or public interest content. Additionally, the company may consider whether the AI content features a public figure or well-known individual engaging in ‘sensitive behavior,’ such as criminal activity, violence, or endorsing a product or political candidate – a concern particularly relevant in an election year, where AI-generated endorsements could potentially influence voter decisions.

YouTube will also give the content uploader a 48-hour window to respond to the complaint. If the content is removed within that timeframe, the complaint is closed. Otherwise, YouTube will initiate a review process. Moreover, removal requires complete deletion of the video from the platform and, if applicable, removal of personal information from the title, description, and tags. Users can blur faces in their videos to comply, but simply making the video private is insufficient, as it could be made public again at any time.

The company did not widely publicize the policy change, although it introduced a tool in March allowing creators to disclose the use of altered or synthetic media, including generative AI, in their content. More recently, YouTube began testing a feature that enables users to add crowdsourced notes that provide context to videos, such as indicating parody or misleading content.

YouTube is not opposed to AI usage, having explored generative AI applications itself, such as a comments summarizer and conversational tool for video-related questions and recommendations. However, the platform has previously cautioned that merely labeling content as AI-generated does not guarantee exemption from removal. AI content must still adhere to YouTube’s Community Guidelines, and labeling it as such does not grant immunity from compliance requirements.

In cases where privacy complaints are filed over AI-generated content, YouTube will not automatically penalize the original creator. Instead, the platform will evaluate the situation with more nuance before taking any action.

A YouTube representative recently clarified on the YouTube Community site: “For creators, if you receive notice of a privacy complaint, keep in mind that privacy violations are separate from Community Guidelines strikes and receiving a privacy complaint will not automatically result in a strike.” In other words, privacy complaints are handled separately and won’t lead to immediate penalties.

Simply put, YouTube’s Privacy Guidelines operate independently of its Community Guidelines. As a result, content may be removed in response to a privacy request even if it doesn’t violate Community Guidelines. While YouTube won’t impose penalties like upload restrictions when a video is removed following a privacy complaint, the company reserves the right to take action against accounts with repeated privacy violations. In short, repeat offenses could lead to consequences, but a single instance won’t necessarily result in penalties.