A Desideratum for Legislative Intervention to Contain Deepfakes

By: Neeraja Seshadri and Sindhu A


INTRODUCTION

Deepfakes refer to any media or digital representation that has been distorted using artificial intelligence and deep learning. They have been referred to as ‘the 21st century’s answer to photoshop’ because they yield utterly realistic fake media that can muddle one’s understanding of truth. Although the technology is sophisticated and requires some level of expertise, certain apps have made it conveniently accessible to the public. More often than not, the technology is weaponised against women or used as a political stunt, and the legitimate potential of deepfakes gets lost amidst the misdeeds. The company CereProc used this technology to create synthesised text-to-speech voices for children with communication disabilities, while Disney’s researchers are working towards creating the first megapixel-resolution deepfake. Despite having groundbreaking aptitude in various sectors including science, entertainment and education, deepfakes have gained prominence for their harmful use cases. Apart from a few scattered pieces of legislation in the United States, the rest of the world is unprepared to deal with deepfakes. Furthermore, the current position of law in India does not offer much assistance, despite the technology having various implications for the right to privacy, intermediary liability and content moderation.

PRIVACY CONCERNS

Deepfakes present consequential complications with respect to the right to privacy, which has been held to be a part of one’s fundamental right to life under Article 21 of the Indian Constitution, as laid down in KS Puttaswamy v Union of India. The provisions of the Personal Data Protection Bill, 2019 (hereinafter the ‘Bill’) protect the privacy of individuals and their personal data, and provide guidelines for its lawful processing. Section 4 and Section 10 of the Bill can be interpreted to accord protection against deepfakes: appropriating an individual’s personal data to create a deepfake would be classified as unlawful processing of data under Section 4, while Section 10 requires a data fiduciary to take consent from the data principal for processing their personal data. However, deepfakes can be used to morph celebrity data, as was witnessed in 2017 when a Redditor superimposed pictures of popular actresses onto performers in pornographic movies, or to resurrect deceased leaders for the purpose of swaying the public. This gives rise to significant apprehensions regarding the privacy of these individuals in particular. Section 2(28) of the Bill defines personal data; however, the categorisation of celebrity data as personal data is dubious and sits uneasily with their right of publicity as recognised in ICC Development (International) Ltd v Arvee Enterprises. Further, the requirement of obtaining consent from the data principal (in this context, the celebrities) under Section 10 is likely to be overlooked because their content is freely available in the public domain.

Deepfakes of deceased individuals can wreak havoc when they depict influential figures and are well-timed. The position of the personal data of deceased individuals remains largely ambiguous, as the Bill contains no provision addressing it. The ideal stance while handling the personal data of deceased persons should be to obtain the requisite permissions from the legal heirs, as the use of such data has direct ramifications for them.

The Bill provides the right to be forgotten under Section 20, where the data principal is of the opinion that the processing of personal data is no longer necessary, was carried out without the data principal’s consent, or contravenes the provisions laid down in the Bill. However, owing to the limits of the Bill’s territorial operation, or the refusal of courts to exercise jurisdiction as witnessed in Ramdev v Facebook, the data principal’s right to be forgotten remains illusory when the data fiduciary acts outside the territorial limits of India.

INTERMEDIARY LIABILITY AND CONTENT MODERATION

The likelihood of the right to privacy of individuals being violated on the internet is high. This imposes a crucial burden on internet intermediaries to ensure that the personal data of users is protected in consonance with their right to privacy. Social media intermediaries faced immense pressure in the run-up to the US elections to create policies banning deepfakes, owing to their dangerous potential to spread misinformation. Facebook, in collaboration with Amazon and Microsoft, initiated a ‘Deepfake Detection Challenge’ in 2019 to facilitate the development of technology to accurately detect deepfakes. The winners of the challenge were only able to create a technology that detects deepfakes with a modest accuracy rate of 65.18%. In light of the same, companies including Facebook, Reddit and Twitter released policies to ban deepfakes. Facebook’s policy involves fact-checking by ‘independent third party fact-checkers’, with over 50 partners worldwide working in 40 different languages. These policies are narrowly drafted and are not fool-proof, because deepfakes cannot be resolved merely through ‘fact-checking’. While fact-checking curbs misinformation, it alone cannot resolve the other complex issues posed by deepfakes, such as violations of personality rights and infringement of intellectual property rights.

In the Indian context, under Section 79 of the Information Technology Act, 2000, intermediaries may be ordered to take down unlawful content upon receiving actual knowledge or a court order, as observed in Myspace Inc v Super Cassettes Industries Ltd. Under the draft Information Technology [Intermediary Guidelines (Amendment) Rules], 2018, which is currently under review and yet to take effect, intermediaries are required to proactively monitor all content and deploy automated tools with appropriate controls to identify and remove unlawful content. At present, most intermediaries carry out content moderation using a mix of artificial intelligence and human reviewers. Complying with the revised Intermediary Rules could prove problematic in the case of deepfakes, as neither human reviewers nor any existing technology can identify them with precision.

CONCLUSION

In the post-truth era, deepfakes can be used as a dangerous weapon to manipulate an individual’s way of thinking. Since most people take things on the internet at face value, deepfakes could be detrimental to democratic processes or to one’s reputation. They have been used to target prominent individuals on numerous occasions, and the ambiguity in the provisions of the Bill fails to provide a comprehensive solution for the personal data of celebrities and of deceased individuals. The Bill has been criticised for not extending protection to these individuals, and this requires immediate consideration with reference to deepfakes. Further, the intermediary guidelines place an extensive burden on intermediaries to proactively monitor and take down unlawful content; this is not feasible in the context of deepfakes, as intermediaries do not have access to accurate deepfake detection technology to moderate their content. In this context, piecemeal amendments to existing laws would prove futile, as they cannot comprehensively address the distinct complications that deepfakes pose to society. Ironically, the development of new technology is the need of the hour, to assist in the detection of deepfakes and to combat the existing technology. However, it is essential that laws are also involved in the process of resisting the technology, as technology alone will not be able to confront the manifold challenges posed by deepfakes, such as attributing liability and awarding damages. Therefore, subject-specific legislation is an indispensable requirement to deal with deepfakes efficiently.


(Neeraja and Sindhu are currently law undergraduates at School of Law, Christ University, Bengaluru.)

Cite as: Neeraja Seshadri and Sindhu A, ‘A Desideratum for Legislative Intervention to Contain Deepfakes’ (The RMLNLU Law Review Blog, 26 August 2020) <https://rmlnlulawreview.com/2020/08/26/a-desideratum-for-legislative-intervention-to-contain-deepfakes/ > date of access. 

 
