Banning Bits of Cyberspace
The Supreme Court has paved the way for a free and open internet, but the future lies in our hands.
It’s a magical feeling to be holding a stack of papers extolling the virtues of an ‘informed citizenry’ and a ‘marketplace of ideas’, or how ‘liberty of thought is a cardinal value’, only to realise—it’s the law.
As news of the Supreme Court’s decision in Shreya Singhal v. Union of India poured in, ‘Indian cyberspace’ erupted with joy. In a landmark decision, the court struck down Section 66A, upheld Section 69A, and read down Section 79(3) of the Information Technology Act – giving a boost to online speech and expression in India. Even politicians who had earlier supported the provision couldn’t help but join the celebrations. At last count, that’s a party of 250 million, with plenty in queue.
If the internet is truly a different medium of communication deserving of special laws, as the Supreme Court seems to suggest, it is important to anticipate how such laws might apply to cyberspace. First, while the present optimism is well deserved, not much will change if existing laws like the Indian Penal Code are arbitrarily applied to the internet.
Secondly, the striking down of Section 66A gives us a chance to focus on the ‘hard cases’ that will surely emerge in the coming years. We will need precise laws, reasonable standards, and sensible enforcement to tackle them. That will be a challenge, if recent events are anything to go by.
Lastly, internet companies should be proactive and transparent in dealing with unlawful content. The judgement clarifies the substance of the law, but remains silent on important procedural questions. This ambiguity alone could affect how online businesses and citizens will be impacted in the decades to come.
A Revolutionary Medium
The Supreme Court explored an important issue concerning the online medium – should Twitter updates, Facebook photos and YouTube videos be regulated differently from posters and pamphlets?
(Should online content be regulated differently from the real-world?)
The Additional Solicitor General, arguing for the government, submitted an extensive list of features that make the internet truly distinctive – wider reach and impact, privacy and security risks, accessibility, anonymity, and so on. The Supreme Court, seemingly persuaded by the ASG’s submissions, concluded that:
“…there is an intelligible differentia between speech on the internet and other mediums of communication for which separate offences can certainly be created by legislation.”
Creating ‘separate offences’ for the internet is a challenging task. Fortunately, the government appears to have tempered its approach, at least in principle, according to a press release it published soon after the decision was pronounced:
“We will have to understand that we cannot set a different standard of public morality for speech & expression in cyberspace from speech in other mediums and in the public domain”
But even with this assurance, it won’t be long before we find ourselves faced with a complex situation. The real test will come now, after the removal of Section 66A, because it will force us to exercise sound judgement in dealing with the truly hard cases.
Should websites be forced to remove content—say, a visual representation of homosexuality, or a documentary about the Kamasutra—if it is considered ‘objectionable’ by others? Depending on where this question is asked, and to whom, the answer might be quite different.
India got a taste of this recently when the government banned ‘India’s Daughter’, a BBC documentary discussing an incident of rape in Delhi. Calls for a ban were widely criticised, but the government managed to secure an order citing various legal provisions, including Section 66A of the IT Act. Within hours, the documentary was made inaccessible (for users in India only).
(Notice to YouTube’s users in India trying to access the documentary ‘India’s Daughter’)
With documentary film makers having to fight such battles, imagine the plight of artists creating content purely for entertainment.
When ‘All India Bakchod’ staged a comedy show featuring leading film stars, they received a standing ovation. When they released a watered-down version of the show on YouTube, they received copies of FIRs instead.
If the same jokes can survive the scrutiny of a 4,000-strong live audience, only to fail the test of ‘public morality’ in front of an online audience at least 2,000 times that size, any attempt at censorship should be strongly resisted (to be clear, AIB pulled down the video voluntarily). As comedy evolves into an accepted form of entertainment, infused with intelligent social commentary (think John Oliver), it will be interesting to see how the government, police and courts respond to similar incidents in the future.
The hardest cases don’t involve dramatic retellings or edgy jokes; just raw data from real life. Take for example the case of 22-year-old Elliot Rodger, who killed six people and injured fourteen others in a violent rampage last May. As news of the killings spread, users stumbled on a series of videos that Rodger had posted on his YouTube page. In them, he talks self-deprecatingly about his virginity, and reveals plans to take revenge on the women who ‘rejected’ him: a sign of things to come.
Rodger’s videos will upset many, and might even inspire someone to commit a similar crime. But simply flushing out virtual artefacts from cyberspace could hurt us in the long run. Instead, an honest and open debate about the incident could trigger a discussion about depression, misogyny and privilege—issues that often lead to such violence in the first place. A similar argument might be made about videos depicting the beheading of James Foley, which were banned from some online platforms.
In Shreya Singhal, the Supreme Court sought to create a legal regime in which an online platform would not have to ‘apply its own mind’ in dealing with such situations, something courts have struggled with in the past. Internet companies are no strangers to this problem. In 2011, twenty-two websites were taken to court for failing to remove ‘objectionable’ content (allegedly cartoons featuring high-ranking politicians).
To remedy this, the Supreme Court read down Section 79(3) of the IT Act, which exempts intermediaries from liability for third-party content, to clarify that ‘actual knowledge’ means knowledge of unlawful content based on a court order. Further, Section 79(3) is circumscribed by Article 19(2) of the Constitution, implying that intermediaries are only required to comply with certain types of takedown requests.
Despite the Supreme Court’s good intentions, intermediaries will be forced to apply their mind in order to avoid liability. Policing ‘objectionable’ content, minimising hate speech, or approving ‘right to be forgotten’ requests in the future will require the exercise of independent skill and judgement. After all, YouTube temporarily removed Elliot Rodger’s videos for a violation of its ‘Community Guidelines’, and Twitter blocked videos depicting the beheading of James Foley based on internal policies.
One unfortunate consequence of the Supreme Court’s insistence on a judicial order is that it may prove frustrating for ordinary individuals who do not have the ability or desire to approach a court to request removal of content. Online platforms could instead use this opportunity to build a relationship of trust with users, by implementing a simple takedown system that would allow users to directly report instances of bullying, harassment, stalking and other misconduct without the need for judicial intervention—going beyond the call of duty, so to speak. The Copyright Rules provide an excellent model, which can be tweaked both technically and aesthetically to create a platform-specific solution.
Another aspect that deserves greater attention is the lack of transparency concerning blocking orders. By leaving Section 69A untouched, the Supreme Court chose not to elaborate on the ‘public’s right to know’, nor did it order the government to publish statistics or transparency reports in this regard. It is unfortunate that until a court or the legislature intervenes on this matter, we will have to rely on the good nature of private corporations to keep us informed.
(Secret blocking orders will continue. Image via Michelangelo Carrieri under a Creative Commons license)
If there is a silver lining, it is that the Supreme Court has directed the government to record its reasons for blocking in writing, so that they can be challenged in a writ petition under Article 226 of the Constitution. But if intermediaries are required to keep blocking orders confidential, and if their contents are never disclosed to the public, can they realistically ever be challenged?
The Way Forward
The Supreme Court’s decision in Shreya Singhal has stirred up an important debate around online speech and expression in India. What is still needed is a framework that encourages sensible enforcement and transparency from the government; clear expectations and takedown procedures from intermediaries; and an appreciation for context and intent by all parties involved.
Human subjectivity lies at the heart of this debate, and because subjectivity offers discretion, it is prone to arbitrariness and abuse. As the government gets ready to draft a new Section 66A, we must ensure that any law regulating online speech and expression is precise, reasonable and objective. Till that happens, it might be wise to exercise some restraint before simply banning bits of information from cyberspace.