Fine-tune the AI labelling regulations framework | Current Affairs | Vision IAS

13 Nov 2025
2 min

Synthetic Media and AI-Generated Content Regulation

The rise of AI-generated synthetic media demands urgent, coordinated action from multiple stakeholders. The Indian government has proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, to address the challenges such media pose.

Key Issues and Concerns

  • Incidents: A viral video featuring Finance Minister Nirmala Sitharaman endorsing a false investment scheme led to financial losses for individuals, highlighting the risks of AI-generated content.
  • Complexity of Labelling: Labelling synthetic or AI-generated content remains complex, especially for mixed media that pairs real visuals with cloned audio.
  • Implementation Challenges: The proposed labelling rules require significant coordination among stakeholders and face challenges in real-world application.

Proposed Amendments and Solutions

  • Labelling Requirements: Platforms must label synthetic media clearly, with labels covering at least 10% of the visual display or audio duration. However, the format and duration of these labels need optimization.
  • Watermark Reliability: Current watermarking by AI companies is unreliable, as tools to strip watermarks are readily available.
  • Tiered Labelling System: A system distinguishing ‘fully AI-generated’, ‘AI-assisted’, and ‘AI-altered’ content could improve clarity.
  • Role of Creators: Influential creators should disclose AI use, and voluntary self-labelling can be encouraged among smaller creators.

Challenges in Detection and Verification

  • Technological Gaps: Platforms struggle to detect and label AI-generated content accurately, with limited success so far.
  • Third-party Tools: Reliability of third-party detection tools depends on their training and accuracy.
  • Failure Rates: An audit found low effectiveness in correctly labelling AI content, with only 30% of test posts across major platforms flagged appropriately.

Recommendations and Future Steps

  • Independent Verification: The involvement of expert verifiers and auditors can enhance the credibility and resilience of social media platforms against deepfakes.
  • Public Awareness: Educating users to recognize signs of deceptive content remains crucial.
  • Legal Protection: Upcoming IT laws in India aim to embed principles that caution users against too-good-to-be-true content.

The authors, Rakesh R. Dubbudu and Rajneil R. Kamath, are associated with the Trusted Information Alliance (TIA), which advocates for information integrity and user protection online.
