The last time you were in a heated debate was probably in the comments section of a Facebook post, or while discussing politics on a family WhatsApp group. The pandemic has heightened human interaction online, pushing the world to work, socialise, drink, protest, and marry digitally. Social media platforms are playing a greater role and are becoming the main stage for individuals to perform, act and express themselves. This freedom has allowed networks and communities on the internet to thrive, channelling human creativity and innovation in newer ways.
However, the ability to opine on social media has spiralled into a situation where a vast amount of content consists of hate speech, racist slurs, disinformation and online violence. The online world is only a mirror of our world offline. While questioning the negativity prevalent on platforms, we must also deliberate on the responsibility of the individual in ensuring that the online world remains safe. The main challenge for regulators has been to identify the actors, enablers and bystanders in the complex interaction that the internet allows for.
The Draft Intermediary (Amended) Guidelines 2018 suggest that the burden of responsibility, regulation and recourse to tackle hate speech and fake news lies with intermediaries. For starters, the guidelines themselves are unclear on what an intermediary is. A virtual platform on which you exchange messages is an intermediary, and so is the neighbourhood cyber café, a physical space. The vagueness in the 2018 Guidelines has led to confusion amongst intermediaries over the nature and extent of their responsibility.
Secondly, is the intermediary rightly positioned to pass judgement on what content is obscene, offensive, or incites violence? Intermediaries cannot become arbiters of free speech. A situation where social media platforms begin regulating speech to the extent that they are forced to monitor what every individual says is a dangerous one.
Section 79 of the Information Technology Act protects intermediaries from being directly liable for the words and actions of third parties that use their services. However, it applies only if they remove illegal content upon obtaining ‘actual knowledge’ of it, observe ‘due diligence’, and comply with the rules made by the executive. The 2018 draft guidelines seek to reverse this, which may lead to over-censorship and the arbitrary takedown of any content flagged by a private person or government authority, for fear of criminal sanction. The Shreya Singhal judgment in 2015 corrected this problem to a certain extent, as the Supreme Court of India clarified that the ‘actual knowledge’ requirement under Section 79 means a judicial order or a notification by the ‘appropriate government’. However, as technology evolves and the nature of our engagement with it changes, it is important that platforms are given adequate direction that can inform clear content takedown policies.
Taking down content should be a democratic process, driven by people and fact-checking organisations. Rather than being threatened with liability, platforms should be directed to focus on user education and community sensitisation, creating mechanisms for personal accountability that make platforms safer.
The need of the hour is to develop an informed approach to dealing with content that is offensive, harmful and incendiary. The responsibility lies with each one of us: with creators to create responsible content, with users to share and view it responsibly, and with platforms to take effective measures against content that is misleading. The words of Ludwig von Mises, a theorist of human choice and action, ring truer than ever in this context: “Everyone carries a part of society on his shoulders; no one is relieved of his share of responsibility by others. And no one can find a safe way out for himself if society is sweeping toward destruction. Therefore, everyone, in his own interests, must thrust himself vigorously into the intellectual battle.”
(Kazim Rizvi is a public policy entrepreneur and Founder of The Dialogue, a tech policy think-tank based out of New Delhi, and one of the leading voices on technology policy in India. Ishani Tikku is reading Public Policy at the University of Chicago and collaborated with The Dialogue. The views expressed are personal)