Editor’s Note: Apar Gupta is a lawyer and the Executive Director of the Internet Freedom Foundation, an Indian digital liberties organisation that seeks to ensure that technology respects our fundamental rights. He has worked extensively with activists and government institutions in public campaigns to advance net neutrality, fight against internet shutdowns and introduce a strong privacy regime in India.
You can listen to the talk here:
Digvijay: We see that there has been a paradigm shift in the intermediary liability (‘IL’) regime, owing largely to Shreya Singhal v. Union of India (‘Shreya Singhal’). But how has Shreya Singhal fared in terms of its application? There have been major changes in the IL arena since the judgment, with the government again considering proactive filtering under rule 3(9) of the 2018 draft guidelines.
Apar: To give a little background to the readers, IL may sound like a specialised area of law [which it is] but it affects all of us. Under the regime, online platforms are not made liable for the content posted on them by end-users. For instance, Facebook is not liable for the posts put up by its users, and Twitter is not liable for the tweets of its users, provided they comply with the lawful takedown of that material. This is the position of law in many jurisdictions, including Europe and the US. However, lately there has been a tremendous amount of concern as such platforms have grown in size and power. The proliferation of abuse such as misinformation (which has an impact on electoral integrity) has raised further concerns. Against this background, in India, Section 79 of the Information Technology Act, 2000 (‘IT Act’) broadly puts into the statutory framework that intermediaries are not liable for the content posted on them if they observe certain due diligence as defined under the Intermediary Liability Rules, 2011 (‘IL Rules’). Section 79 of the parent legislation had been amended in 2008 to make it broader, extending the immunity not only to offences under the IT Act but to offences under all laws. The rules themselves contain the specifics of the compliance required of an intermediary. But towards December last year, the Union Government suggested certain changes to them, and as stated, these changes may conflict with the ruling in Shreya Singhal. In Shreya Singhal, the court held that large platforms cannot be required to judge and delete content by themselves and should not be made liable for it; otherwise there would be a tremendous chilling effect. If the operator of a website or a large platform fears that it can be made liable for content, it will lean towards the side of caution and take down even content which may not be illegal.
This may give rise to litigation, thereby resulting in a vast amount of speech being taken down proactively. Shreya Singhal also clarified that actual knowledge on the part of a platform is necessary to constitute liability; that is, if a platform comes to know that something on it is illegal, it needs to take it down. What constitutes actual knowledge? Actual knowledge, and the commencement of liability, arises only when the platform is put on notice by a judicial order (or an executive order) that something on its website is illegal. The determination of illegality has to be made, prima facie, by the authority, and thereafter the platform has to take the content down within the period specified under these rules. However, these rules are now under review by the Ministry of Electronics and IT.
Draft Rule 3(9) is one of the most dangerous of the suggested changes to the existing IL Rules. In the Internet Freedom Foundation's view, it would be a sledgehammer to online free speech because it places a requirement on platforms to proactively censor content. This means that intermediaries are obligated to be aware of illegal content, which conflicts with the passivity requirement under Shreya Singhal. They would have to use “technology-based automated tools” and appropriate mechanisms to discover content, take it down and proactively censor it. It would be very difficult for intermediaries like Google and Facebook to act on millions of requests and then judge which requests are legitimate and which are not. In fact, rather than acting on orders, Google and Facebook would become private judges; they would implement automated tools and mechanisms which have themselves been shown to be faulty and to carry coding biases. Let's take one granular example: nudity.
Nudity on platforms is a huge issue because community guidelines and terms of service prohibit it. Now, a large amount of content posted online is artistic creation, which may have elements of nudity, or health awareness, which again has depictions of the human body, as in breast cancer awareness or menstrual hygiene campaigns. A lot of this depends on context, which needs to be determined; and much as we may individually say that there is no problem with nudity online by itself, the existing law only penalises obscenity, not nudity.
Things like breast cancer awareness or menstrual hygiene awareness, which involve depictions of the human body and a certain amount of nudity, do not by themselves become obscene. But will an algorithm be able to judge that? And until the algorithm becomes perfect, will we be able to suffer this harm to our free expression online? This is one small, tangible example of how Draft Rule 3(9) may result in a vast amount of content being taken down, even from day-to-day conversation on social media [which forms a large part of the conversations Indians are having online].
Digvijay: The notice and takedown method was invalidated in Shreya Singhal and MySpace v. Super Cassettes, but in MySpace it was revived for IP cases when the court tried to balance the requirements of Section 52 of the Copyright Act and Section 79 of the IT Act. How do you think the courts should proceed with this reconciliation?
Apar: Intellectual property lawyers, over a long period of time and factoring in long court delays, have adopted the tactic of using interim injunctions to secure their clients' interests [which may not be achievable over the longer duration of a case]. Most often they represent content producers who may suffer continuing injury if content is made available online in an unauthorised manner, without the permission or licences of the owners. Quite often they go to court and get interim injunctions or takedowns. Slowly, interim injunctions have developed several variations, including John Doe orders, proactive takedown requirements and directions to online platforms. The Copyright Rules also have a specific notice and takedown mechanism enumerated under them. We can look at this from a specific IP perspective, but in the larger domain of IL, the academic literature and the broader discussion have always treated IP as a separate and distinct species of IL. This is reflected even in the USA, in the DMCA. Even in India, where IL is properly understood to lie in the domain of the IT Act, the IT Rules and their application to claims such as obscenity and defamation, IP takedowns have more often than not been a smaller area of concern. The exception is recent: a Delhi High Court order by Justice Manmohan allowed rights-holders to approach the registrar directly against repeat infringers for website blocking. IP has always been viewed distinctly. It does raise concerns of censorship and also impacts intermediaries, but ultimately the direction comes from a court and is much more specific, or follows the process under the Copyright Rules, in which the owner has to approach the court after sending a notice. In any case, IL scholarship has not viewed IP with the same degree of concern as the other branches of law.
If you look at the rules under the Copyright Act, you need to first send the notice and then approach the court within 21 days. So IP has always been viewed distinctly from the larger domain of IL. In fact, the IL Rules themselves state that for IP infringement a separate regime is to be followed.
Digvijay: India has been late to the Internet party, but do you feel that we require more time to grapple with technological concerns? There is still a degree of tech illiteracy: we have seen it in the draft data protection bill, the e-commerce guidelines and the 2018 guidelines. If I were to stretch it, the Supreme Court's Aadhaar judgment too suffered the same fate: ambiguous or vague terms, undefined words. For example, the term ‘intermediary’ has not been differentiated into types and remains a blanket term in the 2018 guidelines.
Apar: It would be very simplistic to say that our branches of government are ignorant of what is happening. They are very well aware. There is, however, a lack of expertise, and that exists all over the world, not only in India.
Even if you look at Aadhaar a bit more closely, the UIDAI was composed only of technocrats and people from engineering backgrounds. People from those domains were often not attuned to the liberal arts, the social sciences or other forms of regulation, and that set up an entire debate which always had a sense of friction attached to it. Ultimately, it was framed as a contest between people who wanted privacy and people who wanted greater access to subsidies and entitlements, targeted more precisely to reduce fraud and leakages.
In my opinion, the larger policy-framing institutions in India need to act in a more coordinated manner. I think the Ministry of Electronics and IT is taking steps towards that, but it can do much better. Parliament over the last term has been fairly active on issues concerning online abuse, trolling, misinformation, the responsibility of platforms and IL. But in my opinion, all government institutions largely lack a deeper understanding of the constitutional issues placed within Part III of the Constitution. This is the core deficiency I have noticed.
In fact, when you look at regulation and technology [of course you need to understand the tech bit], the technology is not so complex as to be completely inaccessible to a good judicial determination or to a legislature drawing up a good law. What is actually missing is a healthy respect for individual freedom and rights. This, I think, is of much greater concern than any form of illiteracy. The National Judicial Academy is holding courses to train judicial officers on cybercrime and digital evidence. Most judges are now part of the internet ecosystem; they use smartphones, and e-courts have been started in many courts, including the Delhi High Court. It just needs to be carried much further: capacity needs to increase, coordination needs to increase, and the processes by which policy frameworks are developed need to be opened up to ensure greater participation. And we should all lean towards serving the constitutional goals articulated in our fundamental rights.
Digvijay: The government (even if we assume good intentions) always forgets the line between good governance and a Big Brother state. Rule 4 of the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009 says that a competent authority may authorise a government agency to intercept, monitor or decrypt information in any computer. What do you think of the power assigned to government agencies to intercept, monitor or decrypt information?
We see this again in draft rule 3(5), which states that an intermediary shall enable tracing of the originator of information on its platform as may be required by legally authorised government agencies. Citing national security, the government has already allowed ten central agencies to intercept, monitor and decrypt any information generated from any computer in the country. Would the government's interest in regulating free speech become a cause of social discord?
Apar: The core of this question goes back to the 2017 Puttaswamy judgment. It laid down very clear concepts of what privacy means: the autonomy of the individual. And dignity was articulated as a touchstone of privacy itself. All of these are under threat when you have no control over your personal information. That also has a chilling effect on what you do, what you say and, slowly, on what you think. The best narrative is actually presented in Orwell's 1984; a good way to think about this is to look not only at court judgments but also at the literature. The second thing I would like to address is the requirement of traceability under draft rule 3(5), which basically means the removal of end-to-end encryption. Essentially, the rules would require each message on WhatsApp to be fingerprinted with a device ID, and the device ID would then be linked to an Aadhaar number or a driving licence for verification.
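The fingerprinting described here can be sketched in miniature. This is purely illustrative: the draft rules prescribe no particular algorithm, so the use of SHA-256 and the exact inputs below are assumptions. The point is only that hashing a message together with a device identifier yields a per-sender fingerprint, which is precisely what would make each forwarded copy traceable.

```python
import hashlib

def message_fingerprint(message: str, device_id: str) -> str:
    # Hash the message content together with a device identifier;
    # identical inputs always produce the same digest.
    return hashlib.sha256(f"{device_id}:{message}".encode("utf-8")).hexdigest()

# The same message forwarded from two devices yields two different
# fingerprints, so each copy could in principle be traced to its sender.
a = message_fingerprint("hello", "device-A")
b = message_fingerprint("hello", "device-B")
print(a != b)   # True: fingerprints differ per device
print(len(a))   # 64 hex characters for SHA-256
```

Once such fingerprints are logged alongside identity documents, every forward of a message becomes attributable, which is why critics equate traceability with breaking end-to-end encryption.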
A lot of people would ask what the problem with that is. Of course, people should take responsibility for the messages they forward, but this forgets that a lot of these conversations are private. Information, especially when created digitally, is not there just for today or tomorrow; it stays for a long period of time and can be analysed, processed and used in ways that can harm a lot of ordinary people. Our opinions may be changed, our personal relationships may be exposed, and sometimes the white lies we all tell may become the subject of grave social dispute. All these factors may lead us to converse in a very carefully considered manner, which by itself amounts to censorship. It would not make us responsible; it would drive "responsibility" to the point where our individual preferences and our individuality go away. We would stop making jokes and using irony and humour, knowing that our messages are being queried and intercepted. This is the first thing that traceability does.
The second is that WhatsApp already has an immense amount of data, known as metadata. It knows how much time you spend on it, whom you are talking to, what your display picture is, which groups you are party to, and how often you send large media messages; theoretically it can even record location and several other points of information which, put together, give a deep insight into what you are doing. And let's not forget that WhatsApp is only one of the applications on everyone's smartphone.
The third is what happens when your messages are backed up from WhatsApp to a cloud [which is the default option for a lot of us], say Google Drive or iCloud, depending on the platform you are using. WhatsApp itself is a zero-knowledge platform as to who sends and creates messages, but once messages are backed up, that entire cache becomes available to law enforcement on request. So a lot of information about you is available irrespective of whether the WhatsApp message content itself was sent in an encrypted manner.
Currently, the Internet Freedom Foundation is a litigant in the Supreme Court challenging the digital surveillance architecture. This is a batch of petitions filed after the MHA notifications last year. It is important to remember that the IL changes are still draft rules; the existing surveillance mechanism for digital content broadly follows the safeguards laid down in the 1997 telephone tapping judgment, PUCL v. Union of India. At that time there were analogue phones, so the amount of information that could be gathered was limited: who was calling whom, at what time, for what duration, and what the content of the conversation was. Juxtapose that with what is happening today, where your cell phone is an immensely capable device with any number of sensors for temperature, location and so on. It can even host a pixel tracking image and learn how long your cursor stays at a specific part of the screen. All this information gives a deep psychological insight into a person. Clearly, the telephone tapping standards of 1997 are not the best ones to apply to digital surveillance. We need better safeguards; we need much more protection.
Our basic submission flows from the first Puttaswamy judgment. We argued that there has to be a standard of proportionality: the state's security interest has to be individualised, and you cannot have mass surveillance. Secondly, any surveillance put in place has to be through a specific legal order subject to judicial oversight. That argument is based on the second Puttaswamy judgment, on Aadhaar, where Justice Sikri says there should be judicial oversight of any surveillance. Thirdly, the orders, when issued, should state their substantive ingredients: the duration of the surveillance, why it is necessary and why it is being done. Thereafter, the order should be shared with the person who was put under surveillance, to ensure a further degree of accountability in the entire process. We cannot have a situation where people live in fear of their private conversations being open to government requests.
Digvijay: Regarding government requests, in light of Shreya Singhal, do you think the EC is a qualified body to issue a takedown notice to Twitter which it recently did for some content on exit polls?
Apar: The EC is a constitutional body and it has evolved the Model Code of Conduct under its inherent powers. Violations of the Model Code can lead to the subsequent registration of FIRs under the IPC, and over the years the EC and its directions have achieved a level of sanctity. Also, if you go back to Shreya Singhal, it does say judicial or executive orders. It is plainly understood that the media during elections is regulated by the EC with a view to conducting free and fair elections. Despite what certain spirited commentators have written, I don't think there is a very deep problem here.
I know there is an article in The Wire, but I am not persuaded by it that the EC is not the right authority and should not issue takedown notices. During elections, restriction of speech is permitted in law across all media: the notified period is itself a restrictive period in which certain forms of speech are prohibited to prevent hate speech and misinformation from spreading, and in the absence of which the entire exercise would become very difficult. Not only does this have a socially justifiable basis, it also has a long precedent of enforcement. These kinds of jurisdictional issues have always come up with the EC, because legislation has never clothed the EC with statutory powers beyond the bare minimum. It has always been a mix of the EC's inherent constitutional powers exercised alongside directions given by courts in PILs. For instance, even the filing of affidavits disclosing candidates' criminal antecedents and assets came about pursuant to court judgments. So jurisdiction has been somewhat of an issue, but there is a long line of precedent with respect to media restriction and control.
Digvijay: The requirement of traceability in draft rule 3(5) would break end-to-end encryption. Social media platforms deploy end-to-end encryption to secure their users and respect their privacy. Without engaging technical experts in an open consultative process, or introducing a data protection law or surveillance reform, this protection is being done away with by the requirement of traceability. If the latest draft rules go ahead, such a move would mean an enormous extension of government control over citizens' lives with respect to their right of free expression. To what degree can such expansive control over personal lives be legitimate? These requirements obviously violate the basic rights of citizens, but how does this violation stand against the stipulated cause for their introduction?
Apar: Practically, what is required is that you will get a popup and an email every month from, let's say, Twitter or Facebook saying, “you need to behave yourself; these are our community guidelines and if you violate them we will terminate or suspend your account”, or whatever the penalty is.
At first this may seem a much-needed measure, given rampant online abuse and trolling, but consider what it does to the environment of Facebook or Twitter. We know that we ourselves do not engage in trolling, but suppose that, irrespective of this, there is a mandatory requirement on Twitter or Facebook to send these notices every month. And every service provider sends you one: Instagram, Snapchat, Twitter, Facebook, TikTok. What does that do to you as a user?
It reminds you that you are back in secondary school; it is somewhat like a nanny tutoring you to behave yourself. This requirement would turn the internet in India into a very disciplinarian environment, which is bad for creativity, free conversation and discourse. It is again a very censorial kind of measure: it promotes a notion of good, acceptable conduct, and at the other end it results in us wearing a straitjacket of what constitutes good conduct. Just imagine all of us wearing the same school uniform and stepping out into the real world. That is what this would do. It would drive a sense of fear into people when they come online, and I don't think that is the best way to regulate online trolling and the concerns attached to it.
Digvijay: Security experts, according to media reports, have recommended hash value tracking, among other things, to curb the menace of fake news. What other alternatives could rather have been taken to avoid violation of the privacy and security of end-users? What could be the reason that such alternatives have not been pondered upon, the lack of expert consultation, non-existent public debate or ulterior political motives?
Apar: A lot of work on fake news is being done by Claire Wardle, who heads First Draft (a centre at Harvard's Kennedy School focusing on digital journalism). David Kaye, the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, says that we do not need criminal penalties to deal with misinformation, even though the challenges are really high; what we need to do is double down on deepening the institutions which can promote shared social facts: factual statements of who did what and when, rather than exaggerated claims that this or that happened, which are clearly open to dispute. There have also been recent studies showing that even after fake news is debunked, people continue to believe whatever suits their ideological biases, even when they are clearly told which statements are false and which are correct.
So it is a complex problem. Undermining privacy by fingerprinting each piece of content on the internet, which is practically impossible without reworking the entire architecture of the internet, is not a great way to deal with this issue. Fake news is itself a spectrum; misinformation and disinformation are a spectrum; propaganda is a spectrum; all of these are different forms of activity, some of which may be completely innocent or innocuous. The basic problem arising much more locally in India is that high political functionaries are indulging in it for their own political gain. That is what is occurring more often than not. And I really feel it has put stress on the media, especially newspapers and news organisations, whose credibility is also under challenge.
So, public broadcasting should be funded in a more extended manner, there should be financial support for media to do actual ground-level reporting, and fact-checkers should be supported. This should also be considered alongside a larger set of reforms to India's content laws, which are stuck in the colonial age and which nobody has looked at closely enough; these same laws are usually invoked to curb hate speech, defamation and obscenity. It is the vulnerable individuals raising their voices, checking the narratives of organised, well-funded structures that put out a daily stream of ideologically convenient information irrespective of its factual accuracy, who need a greater amount of protection. This also includes the press and media broadcasters. This is the approach I would follow, rather than immediately looking at technological solutions, given that technology can only solve what we fully understand. And what we do fully understand is that inserting a hash value into each piece of content transmitted over the internet (not only a message sent from A to B, but any signal sent across the internet) would fundamentally undermine any conception of privacy.
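Hash-value tracking of the kind the reports describe amounts to matching content digests against a list of known items. A minimal sketch, assuming SHA-256 and a hypothetical blocklist (neither is specified in the reports): an exact match flags the content, but even a one-character edit defeats the match, which is one reason such schemes scale poorly against misinformation while still requiring every message in transit to be fingerprinted.

```python
import hashlib

def content_hash(payload: bytes) -> str:
    """Fingerprint a piece of content by its SHA-256 digest."""
    return hashlib.sha256(payload).hexdigest()

# Hypothetical blocklist: hashes of items already debunked as misinformation.
KNOWN_BAD_HASHES = {content_hash(b"example debunked rumour")}

def is_flagged(payload: bytes) -> bool:
    """Check a forwarded item against the blocklist of known hashes."""
    return content_hash(payload) in KNOWN_BAD_HASHES

print(is_flagged(b"example debunked rumour"))   # True: exact match
print(is_flagged(b"example debunked rumour!"))  # False: any edit changes the hash
```

The brittleness cuts both ways: trivially evaded by those spreading misinformation, yet the infrastructure it requires (hashing every signal in transit) sits on top of everyone's private communications.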
Digvijay: What do you think of the future of IL in India specifically keeping in mind tech illiteracy and the growing concerns against fake news and hate speech?
Apar: Concerns with respect to IL are a topic all over the world. The way to look at it is not to go to lengths where you end up undermining the protection which exists not only for Google or Facebook but for everyone: every provider who hosts or facilitates the transmission of content on the internet. This includes hosting providers, blogging platforms, and even platforms such as Wikipedia or GitHub. Something aimed at only a few very large conglomerates would end up tremendously damaging both free expression and privacy. So other regulatory approaches may be examined.
The US as well as the EU are looking much more closely at their competition law frameworks. In the UK, there was a recent white paper which focuses on a duty of care for online platforms without undermining IL protections. So there are different ways to look at it under different laws. I think proceeding under the IL framework has to rest on an understanding that we want to fix the most obvious problems, but that this should not lead to other complications, should not hurt innovation in the technology sector, and should not lead to companies exiting India itself.
Digvijay: Some advice for students interested in this arena.
Apar: There's hardly any advice needed for students who are already interested in this; I'm sure they'll find their way. There's obviously the internet: you learn on the internet and start developing a passion. There are a lot of organisations in India working on these issues, so it's easy to follow their work. In addition, you can pick up the latest tech writing, read complete books, and engage with the many people writing on these issues on social media. Interact with them, ask them for advice. There are academic courses here as well as abroad, so you can structure your interests more professionally towards policy jobs a little later on, if that interests you. There's a lot of opportunity in this sector and several ways of deepening your interests; a lot of that will happen by reading, writing tentatively over your law school years, and engaging with experts, many of whom are open to conversation and happy to explain what they are doing. So be perseverant, and things will work out.
We express our sincere gratitude to Mr. Apar Gupta for taking out time from his busy schedule and providing insightful responses to the questions. We also thank Mr. Digvijay S. Chaudhary for his worthy contribution to the BlogTalk.