TL;DR
The Indian government has notified amendments to the IT Rules, 2021, specifically targeting AI-generated content. Platforms must now clearly label AI-generated deepfakes and other synthetic media. Failure to comply could cost intermediaries their "safe harbor" protections and expose them to legal penalties.
Vichaarak Perspective
While the move aims to curb misinformation, it places an immense technical and compliance burden on early-stage AI startups. The definition of "synthetic content" remains broad, potentially catching harmless creative tools in its net. We might see a "compliance chill" in which Indian AI developers prefer launching globally first to avoid the friction of the new domestic labelling regime. The line between "creative enhancement" and "deceptive synthesis" is thinning, and these rules may push platforms to over-censor to avoid legal risk.
Schema-ready FAQ
- What is the new requirement for platforms? Platforms must now use visible and metadata-based watermarks for all AI-generated or significantly modified content.
- Who does this amendment apply to? All social media intermediaries and digital platforms providing services in India.
- What are the consequences of non-compliance? Platforms risk losing their "safe harbor" status under Section 79 of the IT Act, making them legally liable for content posted by users.
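To make the "metadata-based watermark" requirement concrete, here is a minimal sketch of what a machine-readable synthetic-content label could look like. The amendment does not prescribe a format, so the field names, the JSON structure, and the `make_ai_content_label` helper below are all illustrative assumptions, loosely modelled on provenance schemes like C2PA rather than on any official specification.

```python
# Hypothetical sketch: a JSON provenance label for a piece of AI-generated
# media. Field names are assumptions, not mandated by the IT Rules.
import json
import hashlib
from datetime import datetime, timezone

def make_ai_content_label(content_bytes: bytes, tool_name: str) -> str:
    """Build a machine-readable 'synthetic content' label for a media file."""
    label = {
        "synthetic": True,                    # the core disclosure the rules require
        "generator": tool_name,               # which AI tool produced the content
        "sha256": hashlib.sha256(content_bytes).hexdigest(),  # bind label to the exact file
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(label)

# Example: label a (stand-in) generated image before publishing it.
label_json = make_ai_content_label(b"fake-image-bytes", "ExampleGen v1")
parsed = json.loads(label_json)
print(parsed["synthetic"])
```

In practice a platform would embed such a record in the file's metadata (e.g. EXIF or a C2PA manifest) alongside a visible on-screen mark; the content hash ties the label to one specific file so the disclosure cannot simply be copied onto different media.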