Proposed Amendments to IT Rules on AI-Generated Content
The Ministry of Electronics and IT (MeitY) has proposed stricter regulations for social media platforms on AI-generated content, aimed at ensuring better compliance with labeling requirements for such material.
Key Proposals
- Social media companies like YouTube, Instagram, and X must continuously display labels on AI-generated content.
- The proposed rules would require a "continuous and clearly visible display" of labels throughout the duration of the content.
- This move is in response to unsatisfactory compliance with previously notified rules that required prominent labeling.
Reasons for Amendments
- Platforms have shown inconsistent compliance, with many AI videos lacking adequate disclosures.
- The intent is to ensure users are aware they are viewing AI-generated content.
- The ministry provided links to non-compliant videos to social media companies as part of the feedback process.
Public Consultation
The proposed amendments are open for public consultation, with comments accepted until May 7.
Definition and Actions Regarding Synthetically Generated Information (SGI)
- SGI excludes assistive and quality-enhancing uses of AI.
- Routine editing of content is not considered SGI.
- Intermediaries must take appropriate action if their services are used to create or share SGI, including disabling access to or removing the content and, where warranted, suspending user accounts.
Responsibilities of Big Tech Companies
- Ensure users declare when information is SGI.
- Deploy technical measures to verify the accuracy of such declarations.
- Once a declaration is verified, prominently display a label or notice identifying the content as SGI.
Case Study: Grok AI Incident
The Grok AI incident illustrates the risks posed by AI-generated content. X (formerly Twitter) faced global criticism after its Grok chatbot generated inappropriate images in response to user prompts, leading to bans and changes to its content filters.