India AI Content Labelling
The Government of India has proposed draft rules to address the "growing misuse of synthetically generated information, including deepfakes," by mandating the labelling of AI-generated content on social media platforms.
Proposed Amendments
- Social media platforms such as YouTube and Instagram must ask users to declare whether their content is "synthetically generated".
- Platforms must employ technical measures to verify these declarations.
- Content confirmed as AI-generated must be clearly labelled, with the label covering at least 10% of the surface area for visual media or 10% of the duration for audio (a labelling sketch follows this list).
- Platforms that fail to comply risk losing their legal immunity from liability for third-party content.
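As a rough illustration of the 10%-of-surface-area threshold, the sketch below shows how a platform might stamp a visible banner onto an image, assuming Python with Pillow. The banner placement, wording, and styling are illustrative choices for this example, not anything prescribed by the draft rules.

```python
# Hypothetical sketch: overlay a visible "AI-GENERATED" banner covering
# 10% of an image's surface area. Illustrative only; not a prescribed format.
from PIL import Image, ImageDraw

def add_ai_label(path_in: str, path_out: str, text: str = "AI-GENERATED") -> None:
    img = Image.open(path_in).convert("RGB")
    w, h = img.size

    # A banner spanning the full width with height equal to 10% of the image
    # height covers exactly 10% of the surface area.
    banner_h = max(1, h // 10)
    draw = ImageDraw.Draw(img)
    draw.rectangle([(0, h - banner_h), (w, h)], fill=(0, 0, 0))

    # Centre the label text inside the banner (default font; sizing is approximate).
    bbox = draw.textbbox((0, 0), text)
    tw, th = bbox[2] - bbox[0], bbox[3] - bbox[1]
    draw.text(((w - tw) // 2, h - banner_h + (banner_h - th) // 2),
              text, fill=(255, 255, 255))

    img.save(path_out)

# Example usage:
# add_ai_label("frame.png", "frame_labelled.png")
```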
Existing Efforts and Challenges
- Meta and Google have begun labelling AI-generated content, but enforcement is inconsistent.
- Industry bodies such as the Partnership on AI (PAI) are developing standards for identifying AI-generated content.
- Most current measures are reactive; labels are often added after content is flagged.
International Context
Deepfakes are a significant concern globally: they affect the entertainment industry and raise questions about personality rights, issues that are especially prominent in India.
- In the EU, the AI Act requires AI-generated content to be marked in a machine-readable format so that it is detectable as artificially generated or manipulated (a metadata sketch follows this list).
- China has introduced rules requiring clear labels for AI-generated content.
- Denmark has proposed legislation granting citizens copyright over their own likenesses, offering protection against unauthorized AI alterations.
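As a rough illustration of what "machine-readable" marking can mean in practice, the sketch below embeds a synthetic-media flag in PNG metadata using Pillow. The field names are assumptions made for this example; real deployments would more likely follow a provenance standard such as C2PA rather than ad-hoc text chunks.

```python
# Hypothetical sketch: embed a machine-readable "synthetic media" marker in
# PNG metadata. Field names are illustrative, not part of any regulation.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_synthetic(path_in: str, path_out: str) -> None:
    img = Image.open(path_in)
    meta = PngInfo()
    meta.add_text("synthetic-media", "true")        # machine-readable flag
    meta.add_text("generator", "example-model-v1")  # illustrative provenance field
    img.save(path_out, pnginfo=meta)

def is_tagged_synthetic(path: str) -> bool:
    # Pillow exposes PNG text chunks via the .text attribute (PNG files only).
    return Image.open(path).text.get("synthetic-media") == "true"
```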