Amid a growing chorus of concern over the influence and dominance of social media companies, the United States has identified six key areas in which to keep their power in check: competition; privacy; youth mental health; misinformation and disinformation; illegal and abusive conduct, including sexual exploitation; and algorithmic discrimination and lack of transparency.
These principles are in line with global scepticism about the influence of social media platforms, as countries around the world, including India, look to check their dominance.
A move to bring ‘greater accountability’
“Although tech platforms can help keep us connected, create a vibrant marketplace of ideas, and open up new opportunities for bringing products and services to market, they can also divide us and wreak serious real-world harms,” the White House said in a statement, after convening a discussion involving experts and practitioners on the harms that tech platforms could cause.
Calling for clear rules on handling competition issues, it said that a small number of dominant Internet platforms use their power to “exclude market entrants, to engage in rent-seeking, and to gather intimate personal information that they can use for their own advantage”. The White House added that there should also be clear limits on the ability to collect, use, transfer, and maintain personal data, including limits on targeted advertising.
A rethink of immunity enjoyed by social media platforms: US vs India
One of the key principles listed by the White House is to rethink the special legal protection available to social media platforms under Section 230 of the US Communications Decency Act (CDA). This section is similar to Section 79 of India’s Information Technology Act, 2000 (IT Act), which classifies social media platforms as intermediaries and broadly shields them from legal action over content that users post on their platforms.
Both these regulations offer social media platforms something called ‘safe harbour’. The idea is that since platforms cannot control in the first instance what users post on their sites, they should not be held legally liable for objectionable content they host, as long as they agree to take such content down when it is flagged by the government or the courts. Since social media platforms are generally understood to be crucial tools of speech, safe harbour is typically seen as a core tenet of enabling freedom of expression on these platforms.
The White House, however, has called for “fundamental reforms” to Section 230 after experts highlighted the “magnitude of illegal and abusive conduct hosted or disseminated by platforms, but for which they are currently shielded from being held liable and lack adequate incentive to reasonably address, such as child sexual exploitation, cyberstalking, and non-consensual distribution of intimate images of adults”.
In February 2021, India notified extensive changes to its social media regulations in the form of the Information Technology Rules, 2021 (IT Rules), which placed significant due diligence requirements on large social media platforms such as Facebook and Twitter. These include appointing key personnel to handle law enforcement requests and user grievances, enabling identification of the first originator of information on their platforms under certain conditions, and deploying technology-based measures on a best-effort basis to identify certain types of content.
Social media companies have objected to some of the provisions in the IT Rules, with WhatsApp also having filed a case against a requirement which mandates it to trace the first originator of a message. One of the reasons that the platform may be required to trace the originator is if a user has shared child sexual abuse material on its platform. WhatsApp has, however, alleged that the requirement will dilute the encryption security on its platform and could compromise personal messages of millions of Indians.
This June, with a view to making the Internet “open, safe and trusted, and accountable”, the IT Ministry proposed further amendments to the IT Rules. One of the most contentious proposals is the creation of government-backed grievance appellate committees, which would have the authority to review and revoke content moderation decisions taken by platforms.
India is also working on a complete overhaul of its technology policies and is expected to soon come out with a replacement for the IT Act, 2000, which will look at ensuring net neutrality, data privacy, and algorithmic accountability of social media platforms.