
New Delhi: The government on Tuesday, February 10, brought in stricter obligations for online platforms on handling artificial intelligence (AI)-generated and synthetic content, including deepfakes. Platforms such as X and Instagram must now take down any such content flagged by a competent authority or the courts within three hours.
The government notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, that formally define AI-generated and synthetic content. The new rules will come into force on February 20, 2026.
The amendments define “audio, visual or audio-visual information” and “synthetically-generated information,” covering AI-created or altered content that appears real or authentic. Routine editing, accessibility improvements and good-faith educational or design work are excluded from this definition.
A key change is that synthetic content is now treated as "information": AI-generated content will be assessed on par with other information when determining unlawful acts under the IT rules.
Platforms must act on government or court orders within three hours, reduced from 36 hours, according to a gazette notification issued by the Ministry of Electronics and Information Technology (MeitY).
User grievance redressal timelines have also been shortened.
The rules mandate labelling of AI content. Platforms that enable the creation or sharing of synthetic content must ensure it is clearly and prominently labelled and, where technically feasible, embedded with permanent metadata or identifiers, the notification said.
Calling for a ban on illegal AI content, the notification said platforms must deploy automated tools to prevent AI content that is illegal, deceptive, sexually exploitative or non-consensual, or that relates to false documents, child abuse material, explosives or impersonation.
Intermediaries cannot allow AI labels or metadata to be removed or suppressed once applied, it added.
