
    IT (Amendment) Rules, 2025

    Posted 12 Nov 2025

    Updated 14 Nov 2025


    Article Summary

    The proposed 2025 amendments to the IT Rules, 2021 aim to regulate synthetically generated information, mandate the labelling of AI-generated content, and enhance platform accountability, addressing deepfakes and fostering a secure, trustworthy online space.

    Why in the News?

    The Union Ministry of Electronics and Information Technology (MeitY) proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules, 2021) to check the misuse of Synthetically Generated Information, including Deepfakes.

    More on the News

    • The amended Rules shall come into effect from November 15, 2025, and shall be called the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025.
      • The proposed amendments aim to strengthen the due diligence obligations of intermediaries, particularly Social Media Intermediaries (SMIs) and Significant Social Media Intermediaries (SSMIs).

    About Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021

    • Originally notified in 2021, the Rules were subsequently amended in 2022 and 2023. They prescribe:
      • A framework for the regulation of content by online publishers of news and current affairs content, and of curated audio-visual content.
      • Due diligence obligations on intermediaries, including SMIs, with the objective of ensuring online safety, security and accountability.
    • Defines SMIs and SSMIs as:
      • SMI means an intermediary which primarily or solely enables online interaction between two or more users and allows them to create, upload, share, disseminate, modify or access information using its services.
      • SSMI means a social media intermediary having a number of registered users in India above such threshold as notified by the Central Government.

    Key Features of the proposed Amendments

    Defines Synthetically Generated Information (SGI)

    • Information that is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that appears reasonably authentic or true.

    Due Diligence in Relation to SGI

    • Mandates that intermediaries offering computer resources that enable the creation or modification of SGI must ensure such information is labelled or embedded with a permanent unique metadata or identifier.
    • In the case of visual content, the label should cover at least 10 percent of the total surface area; in the case of audio content, it should cover the initial 10 percent of the total duration (an illustrative sketch of these requirements follows the key features below).
    • The label or identifier must enable immediate identification of the content as SGI.
    • Prohibits intermediaries from modifying, suppressing, or removing such labels or identifiers.

    Enhanced Obligations for SSMIs

    The amendments require SSMIs to:

    • obtain a user declaration on whether uploaded information is synthetically generated;
    • deploy reasonable and proportionate technical measures including automated tools or other suitable mechanisms to verify such declarations;
    • ensure that SGI is clearly and prominently displayed with an appropriate label or notice.

    If they fail to comply, such platforms may lose the legal immunity they enjoy in respect of third-party content.

    Senior-level Authorisation

    Any intimation to intermediaries for the removal of unlawful information can now be issued only by:

    • a senior officer of joint secretary rank or above in central ministries, or their equivalent in the states;
    • an officer of Deputy Inspector General (DIG) rank or higher in police forces.
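
    The label-size and identifier requirements under "Due Diligence in Relation to SGI" above reduce to simple arithmetic and metadata handling. The following is a minimal, illustrative Python sketch rather than anything prescribed by the Rules: the function names, the side-car metadata structure, and the use of a UUID plus a content hash as the "permanent unique identifier" are assumptions made purely for illustration.

```python
import hashlib
import uuid


def min_visual_label_area(width_px: int, height_px: int) -> int:
    """Minimum label size: at least 10 percent of the total surface area (in pixels)."""
    return int(0.10 * width_px * height_px)


def audio_label_window(duration_seconds: float) -> float:
    """The label must cover the initial 10 percent of the total audio duration (seconds)."""
    return 0.10 * duration_seconds


def embed_sgi_identifier(content_bytes: bytes) -> dict:
    """Attach a permanent unique identifier to synthetically generated content.

    Illustrative only: a real platform would embed this in the file's own
    metadata or a provenance manifest, not return it as a plain dict.
    """
    return {
        "sgi": True,                                    # declares the content as SGI
        "identifier": str(uuid.uuid4()),                # unique identifier for this item
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),  # ties the ID to the bytes
    }


# A 1920x1080 frame needs a label of at least 207,360 square pixels,
# and a 60-second audio clip must be labelled for its first 6 seconds.
print(min_visual_label_area(1920, 1080))                 # 207360
print(audio_label_window(60.0))                          # 6.0
print(embed_sgi_identifier(b"example synthetic frame"))
```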

    Challenges in identifying AI-Generated Content

    • Lack of benchmarks: Existing AI detectors have high error rates owing to the lack of standardized benchmarks, often leading to False Positives (flagging human-created content as AI-generated) or False Negatives (failing to identify AI-generated content), as illustrated in the short sketch after this list.
    • Lack of capacity: The sheer volume of AI-generated content exceeds the processing and storage capabilities of most detection systems. Further, scaling up detection systems will pose financial challenges.
    • Anonymity: AI-generated content/deepfakes can be made anonymously or housed on foreign servers. Further, lack of unified laws or regulatory mechanism hinders cross-border verification and traceability.
    • Imperceptibility: GenAI (Generative AI) content blends seamlessly with human-created content, making it challenging for detection systems to identify.
      • E.g., Tools like Midjourney, DALL-E, and Stable Diffusion can generate hyper-realistic images that are often difficult to detect as AI-generated.
    • Balancing Innovation and Privacy: Detection mechanisms often raise privacy concerns due to metadata tracking. Over-regulation could constrain AI innovation while under-regulation risks unchecked spread of misinformation.
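
    To make the false positive/false negative distinction above concrete, here is a small illustrative Python sketch that computes the two error rates of a hypothetical detector from confusion-matrix counts; the figures are invented for the example and do not describe any real tool.

```python
def detector_error_rates(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Error rates of an AI-content detector from confusion-matrix counts.

    fp = human-created items wrongly flagged as AI (false positives)
    fn = AI-generated items the detector missed (false negatives)
    """
    return {
        "false_positive_rate": fp / (fp + tn),  # share of human content wrongly flagged
        "false_negative_rate": fn / (fn + tp),  # share of AI content that slips through
    }


# Hypothetical evaluation over 1,000 AI-generated and 1,000 human-created items.
print(detector_error_rates(tp=900, fp=80, tn=920, fn=100))
# {'false_positive_rate': 0.08, 'false_negative_rate': 0.1}
```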

    Initiatives taken to tackle Deepfakes

    India

    • Election Commission of India (ECI) issued an advisory on the disclosure of SGI and AI-generated content during elections: All political parties shall ensure that
      • Any synthetically generated or AI-altered image, audio, or video used or disseminated for campaigning purposes shall bear a clear, prominent, and legible label such as "AI-Generated", "Digitally Enhanced", or "Synthetic Content", covering at least 10% of the visible display area (or initial 10% duration for audio content). 
        • The label in the case of video content shall be carried as part of the top band of the screen.
      • Any instance of AI-altered image, audio, or video, misinformation, or manipulated content detected on official party handles shall be taken down within 3 hours of being noticed or reported.
    • Digital Personal Data Protection Act, 2023: Ensures that personal data is processed lawfully by Data Fiduciaries (including AI companies) with user consent and reasonable security safeguards. Deepfakes using personal data without consent can attract penalties under this Act.
    • Indian Cyber Crime Coordination Centre (I4C): Empowers agencies to issue notices for removal of, or disabling access to, unlawful content, including deepfakes, under the IT Act read with the IT Rules, 2021.
    • Others: SAHYOG Portal by I4C, National Cyber Crime Reporting Portal, CERT-In, etc.

    Global

    • EU's AI Act mandates watermarking of synthetic content. 
    • Denmark proposed new deepfake legislation as part of its digital copyright law to protect individuals' rights from the impact of AI-generated deepfakes.
    • China rolled out its AI labelling rules, under which content providers must now display clear labels to identify material created by AI.

    Way Forward

    • Digital framework: A durable solution for AI content detection can be developed based on three pillars: 
      • Establishing a digital provenance framework, similar to Aadhaar, embedding invisible yet verifiable signatures to authenticate content. 
      • Implementing tiered accountability that assigns responsibility based on the role and influence of platforms managing synthetic media.
      • Promoting AI literacy to empower citizens to recognize manipulation.
    • Governance Architecture: Develop regulatory structures, standardized technical protocols, and robust oversight mechanisms to strengthen AI content detection, while balancing privacy and ethical considerations.
    • Watermarking: AI watermarking can provide a solution by embedding indelible markers into AI-generated content, serving as a digital signature that attests to the content's origin and integrity; a minimal illustrative sketch follows this list.
      • E.g., China's mandatory AI labeling rules.
    • Establish global standards for AI-generated content detection: Work towards aligning domestic frameworks with international benchmarks and periodically review policies to ensure robust and adaptive governance of the AI ecosystem.
    • Multi-stakeholder approach: Adopt a multi-stakeholder approach by actively engaging government bodies, industry representatives, academia, and civil society in consultations to identify and share best practices for AI content detection.
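
    As a purely illustrative complement to the watermarking point above, the sketch below shows one simple way a generation tool could attach a verifiable provenance signature to content and a platform could later check it, using an HMAC over the content bytes. The key handling, field names, and workflow are assumptions made for the example; this is not the C2PA standard, the EU or Chinese schemes, or anything mandated by the amended Rules.

```python
import hashlib
import hmac


def sign_content(content: bytes, provider_key: bytes) -> dict:
    """Produce a provenance record for AI-generated content (illustrative only)."""
    signature = hmac.new(provider_key, content, hashlib.sha256).hexdigest()
    return {"generator": "example-ai-tool", "ai_generated": True, "signature": signature}


def verify_content(content: bytes, record: dict, provider_key: bytes) -> bool:
    """Check that the content still matches the provenance record issued with it."""
    expected = hmac.new(provider_key, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])


key = b"secret-held-by-the-ai-provider"        # assumption: key management is out of scope
image_bytes = b"...synthetic image bytes..."

record = sign_content(image_bytes, key)
print(verify_content(image_bytes, record, key))        # True: content matches its record
print(verify_content(b"tampered bytes", record, key))  # False: content was altered
```

    A production-grade provenance scheme would more likely use public-key signatures so that anyone can verify a label without holding the provider's secret key; the symmetric HMAC above is used only to keep the sketch short.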

    Conclusion

    As India steps into the era of AI governance, harmonizing domestic regulations with global standards, fostering innovation-friendly safeguards, and enhancing public AI literacy will be crucial to building a safe and trustworthy online space that upholds both creativity and integrity.

    • Tags :
    • IT Rules, 2021
    • Synthetically Generated Information
    • DeepFakes