
Understanding ‘Enhanced’ Intermediary Due Diligence in Light of the Latest Regulatory Developments Concerning ‘Deepfakes’

Advances in computer-mediated technology have made it possible to create computer-generated videos impersonating real personalities, often by superimposing a person's likeness onto an original video featuring entirely different individuals. Such videos may also be synchronised with fake, digitally manipulated audio.


Even though deepfake technology is ‘mixed-use’ in nature, i.e., having both beneficial and harmful use cases, so far it has been in the news mostly for the latter: promoting risky financial activities such as crypto trading and gambling using deepfakes of Nirmala Sitharaman[1] and Sachin Tendulkar, creating objectionable content using the likenesses of celebrities[2], negatively influencing public opinion during elections[3], and carrying out financial frauds[4]. The potential for such harmful uses demands some form of regulatory action.


Technological problems call for technological solutions, yet past experience with piracy on P2P file-sharing websites and with crypto trading suggests that well-balanced and effective rules and regulations are a necessary complement.


It is possible that online platforms may integrate deepfake detection technology that could identify ‘manipulated media’. However, detection solutions have proven challenging. In July 2023, OpenAI withdrew its own AI detection tool due to “low accuracy”, although efforts are being made to improve the efficacy and accuracy of such tools.[5] Even if low accuracy could be solved, implementing detection tools can quickly become a cat-and-mouse game: each iteration of software upgrades may enable harder-to-detect deepfakes, outpacing the detection toolkit and driving up the cost of keeping it current.
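For illustration only, the minimal sketch below (in Python) shows how a platform might wire such a detection tool into its upload pipeline as a gate. The classifier stub, the function names, and the flagging threshold are assumptions made for this sketch, not any real product's API; as noted above, classifier accuracy and the cost of keeping it current remain the open problems.

```python
# Minimal sketch: gating uploads on a deepfake-detection score.
# The classifier here is a stub; the names and the threshold are
# assumptions for illustration, not a real detection API.

def manipulation_score(media_bytes: bytes) -> float:
    """Placeholder for a real classifier returning P(media is manipulated)."""
    return 0.42  # stub value; a production model would analyse the media

FLAG_THRESHOLD = 0.8  # assumed policy threshold; tuning it is the hard part

def on_upload(media_bytes: bytes) -> str:
    """Decide whether to publish, or route the upload for human review."""
    if manipulation_score(media_bytes) >= FLAG_THRESHOLD:
        return "flag_for_review"  # e.g., label as 'manipulated media' pending review
    return "publish"

print(on_upload(b"...video bytes..."))  # prints "publish" with the stub score
```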


Therefore, a regulatory framework is certainly needed as well. At this juncture, we must revisit the Indian laws applicable to intermediaries to see how well they are positioned to tackle deepfakes.

 

Under the existing framework, upon coming to know of the existence of objectionable content, intermediaries are required to remove it within 36 hours, though removal may also be deferred until an appropriate court order is received. A list of objectionable content is provided in Rule 3(1)(b) of the Intermediaries Guidelines, 2021. The list includes content that is obscene, pornographic, inflammatory, pirated, or seditious, that amounts to misinformation or impersonation, that contains computer viruses, or that relates to betting and gambling apps. As can be seen, Rule 3(1)(b) is sufficiently broad to cover digitally manipulated content too. Such content may also fall under Rule 3(2)(b) in cases of obscenity, in which case intermediaries are obligated to ensure removal within 24 hours.
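To make these timelines concrete, here is a minimal sketch (in Python) of how a compliance team might compute the removal deadlines; the rule labels, function name, and example timestamp are assumptions for illustration only.

```python
from datetime import datetime, timedelta

# Sketch of the takedown windows discussed above: 36 hours for content
# under Rule 3(1)(b), 24 hours for content under Rule 3(2)(b).
# Dictionary keys and the function name are illustrative only.
TAKEDOWN_WINDOWS = {
    "rule_3_1_b": timedelta(hours=36),
    "rule_3_2_b": timedelta(hours=24),
}

def takedown_deadline(knowledge_at: datetime, rule: str) -> datetime:
    """Latest time by which the content must be removed after knowledge."""
    return knowledge_at + TAKEDOWN_WINDOWS[rule]

# Example: knowledge gained at 10:00 on 1 Jan 2024 under Rule 3(1)(b)
# yields a removal deadline of 22:00 on 2 Jan 2024.
print(takedown_deadline(datetime(2024, 1, 1, 10, 0), "rule_3_1_b"))
```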

 

It is pertinent to note that the new set of rules made for intermediaries in 2021[6] required platforms only to ‘inform’ users not to upload objectionable content, a reflection of the earlier rules made in 2011[7]. However, this standard was elevated in 2022 through G.S.R. 794(E) dated 28th October, 2022 (“2022 amendments”)[8], whereby platforms are required not only to ‘inform’ users but also to ‘make reasonable efforts to cause’ them not to upload objectionable content. This is a slight but significant change in the expected standard of due diligence to be conducted by an intermediary. Ever since the amendments, the nature of ‘reasonable efforts’ has remained unclear, although its undefined nature means that the industry could expect some dynamism in intermediaries’ obligations for maintaining online hygiene.

 

Under both the common law and Indian jurisprudence, ‘reasonable’ implies that which is reasonable to a person of ordinary prudence. It is highly context-specific, and its scope varies from case to case. It could mean ‘any effort’ that is backed by some thinking and bears a nexus with the object to which it is directed.

 

One may surmise that the scope of ‘reasonable efforts’ depends on the intermediary itself. Recall that the 2021 guidelines recognise an intermediary as a ‘significant social media intermediary’ (“SSMI”) if it has 50 lakh or more registered users.[9] Accordingly, SSMIs are required to undertake additional compliances that are not applicable to ordinary intermediaries. Therefore, it is possible that ‘reasonable efforts’ could mean different things for different intermediaries depending on their size, scale, and significance.

 

The official press release announcing the 2022 amendments referred to a new ‘partnership model’ between the Government of India (“GoI”) and intermediaries. This expectation of a ‘partnership model’ suggests that ‘reasonable efforts’ could depend on the outcome of negotiations between the GoI and the intermediaries.

 

To this end, the Ministry of Electronics and Information Technology (“IT Ministry”) has issued two advisories so far, which could throw light on the scope of ‘reasonable efforts’, at least in the context of deepfakes. The IT Ministry’s advisory of 7th November, 2023 requested platforms to undertake ‘reasonable efforts to identify misinformation and deepfakes’ (and other objectionable content). Additionally, the advisory required that intermediaries ‘cause users not to host objectionable content including deepfakes’. The advisory also reminded intermediaries of their ‘legal obligation to prevent the spread of misinformation’.

 

It is the author’s view that the said advisory goes beyond the remit of the existing rules, as it imposes on platforms an obligation to ‘identify misinformation and deepfakes’, an obligation absent from the rules. The existing rules require intermediaries to take steps vis-à-vis users alone, while the advisory requires that steps be taken vis-à-vis both users and content. Furthermore, asking intermediaries to ‘prevent’ the spread of misinformation expects a subjective standard of compliance that is broad enough to include untenable censorship measures, over and above the ‘reasonable efforts’ they already have to undertake.

 

In a press release dated 23rd November, 2023, it was reported that the GoI and other stakeholders stressed identifying actionables across four pillars: detection, prevention, reporting, and awareness. Of these, two pillars, viz., reporting and awareness, are already factored into the existing regulations. Prevention is factored in to some extent, as intermediaries are expected to apply ‘reasonable efforts’ in causing users not to upload objectionable content. However, it is surprising that the discussions also included the pillar of detection, because that expects intermediaries to do something beyond what the law requires. It is another matter that intermediaries may prefer taking voluntary measures to keep digital media sanitised.

 

Despite the advisory issued on 7th November, 2023, in what may count as an apparent U-turn, the Minister of State for IT, Mr. Rajeev Chandrasekhar, is reported to have confirmed the industry’s acceptance that the existing rules are sufficient to deal with deepfakes conclusively. It is possible to interpret this statement as conceding the ineffectiveness of the requirements advised by the IT Ministry on 7th November, 2023: confirming the conclusiveness of the existing rules right after issuing an advisory invites one to presume that the advisory has nothing new to offer, apart from extra-legal expectations that can safely be discarded for lacking any constitutional force.

 

Yet another advisory arrived on 26th December, 2023, reportedly circulated privately to select intermediaries. Apart from reiterating the intermediaries’ legal obligations under the existing rules, the advisory in effect requires that:

 

(i) A list of prohibited content must be communicated clearly and precisely;

 

It remains to be seen whether the existing practice of specifying content categories alone is sufficient. It is also unclear what degree of clarity is required when highlighting objectionable content.

 

(ii) Reminders to users of the intermediaries’ terms and conditions, rules, and regulations be made more frequent and regular;

 

How much more frequent the reminders should become is not specified; the existing rules mandate only an annual frequency.

 

(iii) Communication cautioning users against hosting objectionable content must be presented at every step of user interaction;

 

Currently, such communications are included within the intermediaries’ terms of use and access, constituting prior notice. Increasing the frequency of reminders seems like a good idea for enhancing user awareness, but gauging the degree to which user experience may be negatively impacted is a challenge for the intermediaries. It is possible that the industry may push back on the frequency as part of negotiating the scope of ‘reasonable efforts’.

 

(iv) Online users be informed of the penal consequences of their actions should they choose to host or publish any objectionable content, by specifically outlining the relevant provisions of the applicable law, including but not limited to the Indian Penal Code, 1860 (IPC), the IT Act, 2000, and such other laws as may be attracted in case of a user’s violation of Rule 3(1)(b);

 

(v) Intermediaries clearly highlight that they are under an obligation to report legal violations to law enforcement agencies under the relevant Indian laws applicable to the context;

It is strange that while the advisory requires intermediaries to inform users specifically of the legal consequences of their violations, intermediaries are not required to be as explicit about the laws under which they may report such violations to the authorities. This vagueness could have a chilling effect on users, who may presume that the intermediary could report them to the police should they choose to post speech that is useful but critical or politically sensitive. If intermediaries succeed in communicating clearly and precisely what content is prohibited, such chilling-effect concerns may be reduced to some extent.


The official press release announcing the advisory also mentioned that the IT Ministry would closely observe compliance with the foregoing, and may follow up with further amendments, if and when required.


[1] Nirmala Sitharaman, the country’s finance minister, under whom the Government of India (“GoI”) has avoided any encouragement of crypto tokens and coins, appears in the video to be "endorsing" crypto trading. Video of Nirmala Sitharaman Promoting Crypto Trading Platform Is a Deepfake.

[2] Recently, in a viral Instagram reel, Rashmika Mandanna "appeared" in a video without ever having faced the camera that shot it. Around the same time, Kajol surprised everyone by ‘appearing’ as an online fashion influencer in a GRWM (get ready with me) video. After Rashmika Mandanna, Kajol's Deepfake Video Goes Viral.

[3] Deepfakes deceive voters from India to Indonesia before elections.

[4] News portals have also reported on video calls made to victims of cyber-crimes, who are persuaded on the call to transfer money online to fraudsters operating behind synthetically generated videos that use the likeness of a person known to the victim.

[5] From ‘Low Rate of Accuracy’ to 99% Success: Can OpenAI's New Tool Detect Deepfakes?

[6] The intermediaries guidelines as notified in 2021 were subsequently amended in 2022 and 2023; the 2021 text as originally notified does not reflect the updated framework.

[7] The intermediaries guidelines, originally published in 2011, were superseded by the guidelines notified in 2021.

[8] The 2022 amendments reworded Rule 3(1)(b), resulting in a slight but significant change in intermediaries’ obligations.

[9] Through a subsequent notification, the GoI issued a clarification that for an intermediary to be considered as a significant social media intermediary, the threshold number of registered users shall be 50 lakhs.




Screengrab of a deepfake video showing Sachin Tendulkar promoting a gambling app
Tackling deepfakes would require settling the dust on expectations of ‘intermediary due diligence’

Important Disclaimer: The information provided in this article is our interpretation and understanding of the law. The legal analysis presented above is not given for application to any specific set of facts or circumstances peculiar to you or your organization. You may rely on this write-up for your particular facts or circumstances at your sole risk (or benefit) only. We will not be liable, answerable, or responsible to you under any client-attorney relationship.
