The Internet has created enormous potential for free, democratic, and open exchange. Yet some have abused this powerful tool to propagate harmful and illegal content online. This has fueled a growing interest in holding big tech platforms accountable for how they moderate content on their services, including, but not limited to, openly sharing information on data collection and access, as well as on removal requests. Civil society has been key in pressuring big technology companies to be more transparent, which has led to the now-widespread practice of publishing “transparency reports”.
Transparency is critical to well-functioning democratic and safe exchanges, and a requirement for fair processes, so it is not surprising that it is also becoming a centerpiece of upcoming regulations for online platforms.
In 2010, Google became the first internet company to publish a formal transparency report, a practice that became more widely adopted by 2013 amid growing concerns about, among other things, the risks of government surveillance. Since then, a number of principles and frameworks – ranging from civil society initiatives to government policies – have been adopted around transparency reporting. Today, most frameworks target transparency about the moderation of Terrorist and Violent Extremist Content (TVEC) or Child Sexual Abuse Material (CSAM) online; however, upcoming regulations, such as the European Digital Services Act (DSA), expand transparency reporting to cover a company’s entire content moderation process.
Overview of voluntary transparency reporting frameworks
The following table provides an overview of the main bodies and principles that guide transparency reporting today.
Title | Scope | What it entails |
---|---|---|
Global Internet Forum to Counter Terrorism (GIFCT) 2017 | Terrorist and extremist content on online platforms | • Requires its member companies to publish transparency reports and publishes its own. • Through a multi-stakeholder approach, defines the elements of meaningful transparency and holds its member tech companies accountable. |
Tech Against Terrorism 2017 | Guidelines on transparency reporting on online counterterrorism efforts (targeted at governments and smaller online service providers) | • Asks governments to detail the processes and systems they use to discover, report, and store terrorist content and activity, and the redress mechanisms they provide. • Provides transparency reporting guidelines for tech companies and advises on community guidelines enforcement and on methods to increase transparency around content moderation processes. |
Santa Clara Principles 2018 | Targeted at online service providers that do content moderation | • Recommends steps that companies engaged in content moderation should take to provide meaningful due process to impacted stakeholders and to better ensure that the enforcement of their content guidelines is fair, unbiased, proportionate, and respectful of users’ rights. • Sets out foundational and operational principles as well as implementation mechanisms. |
EU Code of Conduct on Countering Illegal Hate Speech Online 2019 | Targeted at voluntary industry signatories | • The Code of Conduct was created in 2016 in cooperation with tech companies to respond to xenophobia and racism online. • Signatories commit to providing transparency reports and to ensuring that removal requests for illegal content are dealt with in less than 24 hours. |
Center for Democracy and Technology 2021 | A framework for policymakers | • Focuses on users’ speech, access to information, and privacy from government surveillance. |
OECD Voluntary Transparency Reporting Framework 2022 | Terrorist and violent extremist content (TVEC) on platforms | • Responds to the proliferation of differing frameworks, definitions, and measures across existing transparency reports. • Sets a baseline standard for transparency on TVEC. • Launched a portal for submitting and accessing standardized transparency reports from online services. |
Tech Coalition 2022 | Targeted at voluntary industry signatories for CSAM | • TRUST is a voluntary industry framework for transparency reporting that focuses on child sexual exploitation and abuse online. • It takes into account the variety of digital services in this environment as well as differences in company size and maturity. |
EU Code of Practice on Disinformation 2022 | Targeted at voluntary industry signatories | • Created in 2018 and updated in 2022, this Code addresses disinformation, notably in the context of Covid-19 and the war in Ukraine. • Requests platforms to provide monthly reports on their efforts to promote authoritative data, improve users’ awareness, and limit disinformation and false advertising; it also sets up a Transparency Centre and a Task Force to oversee the implementation of the Code and keep it future-proof. |
Regulations on transparency reporting
Aside from frameworks from civil society groups and voluntary codes created in cooperation with industry, many governments have passed (or are in the process of passing) laws around online hate speech that encourage transparency reporting. As mentioned above, the DSA requires all online intermediaries to provide transparency reports, the details of which vary according to the type of service. In the US, the Platform Accountability and Transparency Act also aims to address this growing issue and implement transparency legislation. Similarly, the Digital Services Oversight and Safety Act of 2022, proposed in the US, sets out transparency reporting obligations for content moderation.
Implications for online service providers
With the increasing demands for accountability and transparency placed on online platforms as well as governments, it is not surprising that numerous transparency reporting frameworks have emerged. Despite their variations, transparency reporting at its core entails keeping a clear and consistent record of requests to remove or restrict content.
Conclusion
To ensure alignment with industry best practices and compliance with regulatory transparency requirements, companies will need new processes and tools that can handle and organize large volumes of content moderation activity and that keep pace with rapidly evolving expectations and requirements. Concretely, this means being able to track every action taken on user content, every notice received from any source, and, further, every complaint about a content moderation decision taken by the online service, as sketched below. Streamlining and unifying these workflows will be crucial for all players to remain compliant and to retain the trust of their users.
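As an illustration only, a minimal sketch of the record-keeping this implies might look like the following. All names and fields (ModerationEvent, ActionType, report_counts, and so on) are hypothetical and not drawn from any particular framework or product; the point is simply that actions, notices, and complaints are logged in one consistent structure from which report figures can later be aggregated.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional


class ActionType(Enum):
    REMOVAL = "removal"
    RESTRICTION = "restriction"  # e.g. demotion, geo-blocking
    NO_ACTION = "no_action"


@dataclass
class ModerationEvent:
    """One auditable entry: the action taken, the notice that triggered it, and any complaint."""
    content_id: str
    action: ActionType
    legal_basis: str                     # e.g. "terms_of_service" or "illegal_content"
    notice_source: Optional[str] = None  # e.g. "user_report", "trusted_flagger", "government"
    complaint_id: Optional[str] = None   # set if the decision was appealed by the user
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def report_counts(events: list[ModerationEvent]) -> dict[str, int]:
    """Aggregate the event log into the headline figures a transparency report needs."""
    counts: dict[str, int] = {}
    for e in events:
        key = f"{e.action.value}/{e.legal_basis}"
        counts[key] = counts.get(key, 0) + 1
    return counts
```

A production system would additionally need durable storage, access controls, and mappings to the specific reporting templates of each framework and regulation discussed above.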
To find out more, contact us at info@tremau.com.
Tremau Policy Research Team