MeitY Notifies Amendments to Regulate AI-Generated Content and Deepfakes

Updated on: 11-Feb-2026
Key Highlights

  • MeitY notified amendments to the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, on February 10, 2026.
  • The amendments introduce regulation of AI-generated content and deepfakes, including a formal definition of deepfakes.
  • Takedown timeline for flagged unlawful AI content or deepfakes cut to 3 hours, down from 36 hours in earlier drafts.
  • Labelling requirements for AI content made more flexible: labels must be "prominently" visible instead of meeting the earlier 10% space-coverage requirement.
  • Rules mandate user declarations for synthetic content, periodic platform warnings, and consequences like account suspension.
  • Framework effective from February 20, 2026, coinciding with the last day of the India AI Impact Summit.

The Ministry of Electronics and Information Technology (MeitY) issued a gazette notification on February 10, 2026, amending the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The changes target social media intermediaries and platforms handling user-generated content. They focus on synthetically generated information (SGI), including deepfakes, to address misinformation and misuse risks.

The rules take effect on February 20, 2026, providing a 10-day compliance period from the notification date.

Definition of Deepfakes and Synthetically Generated Information

The amendments define deepfakes as:

  • Audio, visual, or audio-visual information artificially or algorithmically created, generated, modified, or altered using a computer resource.
  • Content that appears real, authentic, or true.
  • Content that depicts or portrays any individual or event in a manner indistinguishable from a natural person or real-world event.

Routine edits, accessibility features, and good-faith uses (e.g., educational or design) are excluded from this scope.

Stricter Takedown Timelines for Unlawful Content

Platforms must remove deepfakes or other unlawful AI-generated content within 3 hours when:

  • Flagged by a competent government authority.
  • Ordered by a court.

This replaces the 36-hour window in earlier draft proposals, aiming for a faster response to harmful content.

Flexible Labelling and Disclosure Requirements

The notified rules relax previous draft suggestions:

  • AI-generated content must carry a label that is "prominently" visible (the earlier 10% space-coverage requirement has been dropped).
  • Platforms cannot allow removal or suppression of applied AI labels or embedded metadata.

Social media intermediaries must require users to declare if a post contains synthetically generated information. Platforms need tools to verify declarations and ensure prominent AI labelling.

User Warnings and Consequences for Violations

Platforms are required to:

  • Inform users at least once every three months that violations may lead to post removal, account suspension or termination, and legal action.
  • Assist in identifying violating users and disclose their identity to complainants when required.

Contraventions can result in suspension or termination of accounts.

Context and Timing

The amendments build on draft proposals from October 2025. The February 20 effective date aligns with the conclusion of the inaugural India AI Impact Summit (February 16–20, 2026) in New Delhi, which focuses on responsible AI deployment and global impact.

What to Watch Next

Platforms like X, Instagram, YouTube, and others must update systems for detection, labelling, and rapid takedown by February 20. Further guidelines or enforcement details may emerge post-summit.
