What Does the DSA Mean for Your Business?

On November 16th 2022, the Digital Services Act (DSA; Regulation 2022/2065) entered into force. This regulation aims to create a safer digital environment where users, businesses, governments, and civil society can flourish. The DSA will apply to VLOPs and VLOSEs by August 2023, and to all online services operating in Europe by February 2024.

Acknowledging the immense benefits the Internet has brought to humanity, the DSA builds on two central principles: that what is illegal offline should be illegal online, and that more should be done to tackle the spread of illegal and harmful content online, such as terrorist content and child sexual abuse material.

Who does the DSA apply to? 

The DSA applies proportionate rules to online intermediary services, depending on the size and type of services an organization offers. The DSA defines four categories of services:

  1. Providers of intermediary services – Examples include internet service providers, content distribution networks, DNS services, VOIP services, web-based messaging and e-mail services, etc.
  2. Providers of hosting services – Examples include webhosting or cloud services.
  3. Online platforms – Examples include social networks, online marketplaces, app stores, online travel and accommodation websites, content sharing websites, etc. 
  4. Very large online platforms (VLOPs) and search engines (VLOSEs) – Any online platform or search engine that has more than 45 million average monthly users in the EU. 

Obligations under the DSA

Micro and small enterprises* are exempt from the additional obligations that apply to online platforms and VLOPs. However, they are still required to communicate the average number of active monthly users of their service to the competent authorities.

Enforcement and Implementation

Under the DSA, each EU Member State will designate a Digital Services Coordinator (DSC) to supervise the providers of online services and the enforcement of the regulation. The DSC will have the power to carry out inspections, penalize infringements, impose fines or periodic penalty payments, and request the temporary restriction of the service in case of a continued or serious offense. Finally, the Commission will be the exclusive enforcer of the obligations specific to VLOPs and has the power to intervene upon DSC requests.

Failure to comply with obligations can result in fines of up to 6% of the annual worldwide turnover of the preceding fiscal year. Furthermore, failure to provide complete and correct information for an inspection can result in fines of up to 1% of the provider's annual income or worldwide turnover.


Implications for your business

The DSA imposes a number of new obligations on online service providers and introduces hefty fines to ensure compliance. To avoid these, providers of online services must implement a number of operational changes. Most immediately, providers need to declare a single point of contact and a legal representative in the EU, who can be held liable for offences. Hosting services, online platforms, and VLOPs also need to ensure they have well-designed and easy-to-use notice-and-action and complaint-handling mechanisms in place, and that they implement the appropriate tools to process requests from law enforcement, trusted flaggers, and out-of-court dispute settlement bodies.

Additionally, online marketplaces will be subject to a specific set of rules that will impact the way they design their platform, show advertisements, and deal with traders and consumers. The protection of minors is also central to the regulation, and providers will have to implement child protection measures such as age verification, as well as related risk assessments.

Concretely, providers of online services will have to adopt a streamlined set of processes that allow for continuous compliance, notably with obligations such as transparency reporting and independent audits.

How can Tremau help you?

Tremau’s solution provides a single trust & safety content moderation platform that prioritizes compliance as a service and integrates workflow automation and other AI tools. The platform ensures that providers of online services can respect all DSA requirements while improving their key trust & safety performance metrics, protecting their brands, increasing handling capacity, as well as reducing their administrative and reporting burden.

Further resources

To learn more about how the DSA impacts your business, check out a few of our resources on the topic below:

* Micro and small enterprises are those with a staff headcount of fewer than 50 and an annual turnover of less than €10 million. They are exempt from certain obligations falling upon online platforms.

Tremau Policy Research Team

Utopia: DSA scope in focus: I have a comments section, does the DSA apply to me?

Does having a comments section qualify my services as a hosting service?

In short: Yes. Having a comments section on your platform qualifies your services, at a minimum, as a hosting service. More precisely, a hosting service within the meaning of the DSA is a service that stores user-generated content (UGC). Perhaps because the definition is rather simple, it encompasses a whole host of services that are very different in type, size and business model. The category includes webhosting companies like AWS or GoDaddy, file storage and sharing services like Dropbox, marketplaces like Amazon and Alibaba, as well as social media services like Facebook. Because there are no exemptions based on the size or type of the service, hosting services also include some services that may come as a surprise, such as the comments sections of online newspapers and blogs as well as user reviews on e-commerce websites.

Does a comments section also qualify my services as an “online platform”?

Short answer: It depends.

Read the full version of the blog on Utopia.

Marketplaces & the DSA

While the Digital Services Act (DSA) does not make a single mention of the term ‘online marketplace’, do not let this ambiguity in terminology fool you. The DSA includes significant obligations which apply to all online services that allow consumers to connect with traders. Read on to find out what this entails and how to get prepared.

The importance of effective trust & safety policies on marketplaces

Online marketplaces are digital platforms that allow buyers and sellers to transact goods or services with each other, often acting as intermediaries in the transaction. This includes businesses of all shapes and sizes including Amazon, Shopify and eBay Kleinanzeigen. 

The EU, with its 450 million citizens, represents a big market: Germany and France alone account for around 68.4 million and 51.5 million e-commerce customers, respectively. Most online shoppers look for previous customers’ reviews, and 97% of online shoppers who read product reviews also check for responses from the seller. Building trust has thus become an essential part of retaining customers. After all, several studies have established that customer retention should be a top priority for businesses, even more so than customer acquisition. According to one survey, existing customers are 50% more inclined to try new products from the same platform.

The DSA sets out numerous substantive obligations regulating the relationship between an online marketplace and its users. If implemented well, these can go beyond strict legal compliance and help keep users engaged with your services.

Enter the Digital Services Act: What will you need to do to ensure compliance?

The DSA seeks to promote a fair, transparent, and competitive ecosystem of digital platforms. It therefore retains the important principle laid down more than two decades ago in the e-Commerce Directive: a platform is not liable for the content of its users unless it has explicit knowledge of it. This remains the case for marketplaces – they can benefit from this liability exemption as long as they act swiftly to review and remove content and products which are notified and assessed as illegal.

Overall, the DSA requires marketplaces to comply with the same obligations as any other hosting service, online platform or social media service. This means that marketplaces must have effective notice-and-action mechanisms allowing users to report products and pieces of content that are illegal. Further, marketplaces, like any other hosting service, need to ensure that they inform both the user who reported the product and the seller of the product whenever a decision is taken to remove a listing.

The DSA also requires significant tracking of all content moderation-related data points: all user notices, together with any moderation action taken under the marketplace’s terms of service, will need to appear in a transparency report. Each instance should be quantified and categorised by type of illegal content and source of detection.
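To illustrate what this tracking implies in practice, the minimal sketch below (our illustration, not a format prescribed by the DSA) aggregates hypothetical moderation records along those two dimensions, type of illegal content and source of detection:

```python
from collections import Counter

# Hypothetical moderation log entries; field names are illustrative only.
moderation_log = [
    {"content_type": "counterfeit_product", "source": "user_notice", "action": "removal"},
    {"content_type": "counterfeit_product", "source": "automated_detection", "action": "removal"},
    {"content_type": "illegal_hate_speech", "source": "trusted_flagger", "action": "removal"},
    {"content_type": "unsafe_product", "source": "user_notice", "action": "visibility_restriction"},
]

# Count actions per (type of illegal content, source of detection) pair,
# the breakdown a transparency report would need to show.
report = Counter((e["content_type"], e["source"]) for e in moderation_log)

for (content_type, source), count in sorted(report.items()):
    print(f"{content_type:25} {source:22} {count}")
```

Keeping this kind of structured count from day one avoids having to reconstruct figures from raw logs when the reporting deadline arrives.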

In addition to all the general rules applicable to online platforms, marketplaces have a few additional obligations. The so-called “know your business customer” obligation ensures that platforms know the traders that offer products and services, by collecting and storing information which can be provided to authorities. Further, online platforms that act as marketplaces must also ensure that their interfaces are designed in a way that allows sellers to comply with product safety laws, and they are obliged to conduct random checks of listings against official databases.

Perhaps most importantly from the perspective of users and their safety, marketplaces that become aware of illegal products or services being sold must inform the consumers concerned about the seller and about the options available to seek redress. Some studies suggest that informing users of potential risks associated with products can positively impact user retention. A 2022 report found that the majority of shoppers (72%) consider transparency important, meaning detailed information about a product such as its ingredients and manufacturing process. Additionally, almost two-thirds of shoppers (64%) stated that they would switch to a different brand that offers more comprehensive product information.

Where a marketplace has 45 million or more average monthly users in the EU, it will be designated by the Commission as a very large online platform (VLOP). This means that it will additionally have to comply with the extensive set of risk assessment and mitigation obligations for VLOPs, as well as carry out a yearly audit of its services and of the measures it takes to ensure user safety.

Rules for marketplaces are changing fast 

The ink on the DSA has barely dried, yet new rules are already emerging that specify how certain obligations need to be applied in the context of potentially unsafe products. For example, the EU’s General Product Safety Regulation (GPSR) imposes a three-day deadline for processing user notices related to hazardous products on online marketplaces. It further specifies that random checks have to be performed against the Safety Gate.

What can Tremau do for you?

If you are a marketplace, you are likely already working hard to ensure buyers and sellers can effectively interact on your platform. However, the DSA obligations are extensive, and their operational impact and implementation solutions require careful assessment. Don’t know where to start, or lacking the internal resources to go it alone? Tremau can help – check out our advisory services to learn more or reach out to our policy experts.

Feeling like you already have a grasp of the DSA obligations, but still looking for help to comply with the law? We have software that can help increase your handling capacity and improve your effectiveness on key trust and safety metrics. Reach out here for a demo of our content moderation software, which will allow you to comply with the law while tracking all relevant metrics for your business.

The DSA is coming: How will it affect your business in the EU?

What is the DSA?

On November 16th 2022, the Digital Services Act (DSA; Regulation 2022/2065) entered into force. The DSA is a landmark law setting out a set of common rules for digital services across the EU. With the new law in force, companies of all shapes and sizes will be obliged to put in place, by 2024, content moderation processes and procedures to address illegal content and protect user rights.

Who is affected?

The DSA will apply to all intermediary services operating in the European Union, in other words every company hosting or facilitating the transmission of user generated content (UGC) in the EU. This includes, for example, internet service providers, content distribution networks, DNS services, VOIP services, web-based messaging, but also hosting services such as web hosting or cloud storage, as well as online platforms such as social networks, online marketplaces, app stores, online travel and accommodation websites, online gaming and content sharing websites, etc.

The DSA has a global impact as it applies to all companies offering their services in the EU. As such, companies do not need to have any physical presence in the Union – if they have any EU users or they market their services in Europe, they need to comply.

Read the full version of this article on Utopia.

How will the DSA impact online platforms’ policies on minors?

A 2022 Pew Research Center survey found that 95% of teenagers (aged 13-17) use YouTube and 67% use TikTok, with nearly one in three reporting near-constant use. The amount of screen time has also increased in recent years and now hovers around five and a half hours on average.

With a greater number of underage users and increasing opportunities to create and share content, comes a greater risk of exposure to illegal and harmful content online. The EU’s landmark legislation, the Digital Services Act (DSA), responds to these challenges around child protection and sets out a number of obligations which aim to keep children safe online. 

How will the DSA change platforms’ trust and safety policies related to minors? 

The obligations addressing child protection in the DSA are spread throughout the text. At the most basic level, any service provider whose service is directed at or used by minors has to make its terms of service understandable to minors. The most impacted, however, are likely to be online platforms: social media, video sharing services, and many online gaming platforms, for example, need to take measures to ensure a high level of privacy, safety, and security for minors when designing their platforms.

The broad nature of the new obligation is challenging as it gives little information or detail on what exact measures will achieve compliance and what falls short. Diving into the DSA, there are hints of what compliance could mean — for example, services should ensure that minors can easily access mechanisms referenced in the DSA such as notice and action and complaint mechanisms. They should also take measures to protect minors from content that may impair their physical, mental or moral development and provide tools that enable conditional access to such information.

Will there be guidance on compliant content moderation practices?

There is no obligation on the Commission to publish guidance on how platforms should safeguard their younger user base before the overall compliance deadline in February 2024. However, we can expect some co-regulatory measures to be in development as part of the Better Internet for Kids+ strategy. In the meantime, companies must seek out and apply existing best practices and develop their own measures in order to comply.

Future best practices on keeping children safe online will likely be developed in the risk assessment cycles of very large online platforms as well. Platforms with more than 45 million monthly active users will have to assess systemic risks related to minors and children such as risks of them being exposed to content which may harm their physical or mental health, or promote addictive behavior.

How can Tremau help you?

If you are an online platform, you are likely already working hard to ensure children are protected on your platform. However, whether your existing measures are enough to comply with the new obligations in the DSA needs careful assessment and benchmarking against best practices. 

Tech Policy Press: Knocking at the Door of Transparency: The Digital Services Act and Infrastructure Providers

Given the increasing focus on trust and safety and the responsibilities of actors across the Internet ecosystem, regulation has gradually shifted its focus to transparency requirements. What are the processes that must be in place to deal with illegal content while protecting fundamental rights and due process? The Digital Services Act (“DSA”) is quite clear: if a company’s services are to be considered safe and trusted, transparency is non-negotiable.

If there is one place on the Internet where transparency can provide some much-needed insight regarding content moderation, it is its infrastructure. The infrastructure of the Internet is a space consisting of various actors who provide everyday services that allow users to have a seamless, reliable, and secure Internet experience; however, it generally attracts little attention because it is obscure and, predominantly, technical. Actors at this level include conduit, caching, and hosting services, seen in companies such as Cloudflare, Amazon Web Services, and Google Play Store, to name a few. Their operations are crucial, yet they often seem distanced from the public discourse; they are often considered inaccessible and, occasionally, unaccountable to everyday users.

The question, therefore, is whether the DSA could help shed some light on the practices of these otherwise invisible actors. Does the DSA manage to create a consistent and predictable environment for infrastructure providers that could help alleviate some of the opaqueness of their content moderation practices?

Read the full version of this article on the Tech Policy Press.

By: Konstantinos Komaitis, Louis-Victor de Franssu, Agne Kaarlep

Very Large Online Platforms: So, you have 45 million users – what now?

February 17th marked an important milestone for the EU’s landmark Digital Services Act (DSA) as all online platforms operating in Europe were required to publish their average monthly active users (MAU). Online platforms and search engines whose MAU surpass 45 million users in the EU will now be designated by the Commission as Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs). 

But what does this mean concretely – and what’s next for these companies? 

Four months until D(SA)-Day

So far, about 20 companies have publicly indicated having more than 45 million MAU in the EU.  A number of other platforms have also declared that they do not, at this stage, meet the fateful threshold: this could, however, change for those witnessing important growth in users over the next year, or beyond – pushing them toward the VLOP category. 

Importantly, regardless of how a platform classifies itself, it is the Commission that has the final say on whether it is to be considered a VLOP. A platform could therefore still be designated as a VLOP in the coming months even if it does not meet the threshold today. For example, the Commission may request more information and may also designate a service based on other sources if it has credible data that the service meets the threshold.

Following designation, which will be officially communicated to the platforms by the Commission, the concerned services will have four months to complete their first risk assessment exercise. Given that the Commission is likely to move fast with designations, the first risk assessments will need to be completed as early as July 2023.

July 2023: The summer of risk assessments

As mentioned, the first DSA obligation VLOPs and VLOSEs will need to comply with is undertaking risk assessments (Article 34). Concretely, all services designated by the Commission are required to assess annually the systemic risks stemming from the design, functioning or use of their platforms, including any actual or foreseeable impact on:

  1. the spread of illegal content;
  2. the exercise of fundamental rights;
  3. democratic processes and civic discourse; and
  4. concerns related to gender-based violence, public health, minors, and serious negative consequences to people’s physical and mental well-being.

Additionally, for all of the above-mentioned risks, platforms need to assess if these are influenced by any intentional manipulations of their service. 

This obligation will present a significant challenge given the novelty of the exercise and the substantial range of risks covered. Some of the risks will be easier to assess, for example, the dissemination of illegal content, where definitions are more readily available – yet still a complex exercise given the differences across jurisdictions. Others, such as the negative effects on fundamental rights, will be far more complicated, given their broad scope (for example risks related to freedom of speech, non-discrimination, or children’s rights). The most challenging category is likely to be the assessment of risks where the effects are evident outside the platform including, for example, impacts on democratic processes or on psychological well-being. 

In short, VLOPs and VLOSEs will need to consider risks observed on the platform, such as the spread of terrorist content or hate speech, as well as risks where the impact is seen outside the platform, such as concerns related to the psychological well-being of users. In practice, this will likely also mean constructing multiple risk scenarios to understand the effects of the interaction between platforms and users.

As for assessing how potential intentional manipulations add to the risks mentioned, the Code of Practice on Disinformation gives some good indication as to what would be expected from VLOPs and VLOSEs while undertaking their DSA risk assessment cycle. 

What happens next: Transparency, audits & access to data 

The risk assessment obligation is only the first step. Once a platform has identified and assessed its systemic risks, it will be required to draw up and implement detailed mitigation plans. In addition to this annual self-assessment obligation (risk assessment results and adjustment of mitigation measures), VLOPs will be required to undergo yearly independent audits of their risk assessment and the mitigation measures taken. Where the audit report finds deficiencies, the VLOP (or VLOSE) will have only a month to set out a plan on how to address the gaps identified. Once the audit is completed, platforms will be required to make the audit results public.

The verification mechanisms do not stop here – data access provisions in the DSA mean that VLOPs and VLOSEs need to, under specific conditions, provide access to data to regulators as well as third-party researchers, allowing research into the systemic risks impacting the EU. As such, the risk assessments conducted by the platforms internally as well as the mitigating measures drawn are likely to come under significant scrutiny, not only by auditors but also by researchers who may conduct independent risk assessments based on the data they receive access to. 

Below 45 million users but rapidly growing user numbers?

If a platform is on the cusp of reaching 45M MAU in Europe but is not quite there yet, this is the time to be testing and preparing in-house as well as looking at what the existing VLOPs do to comply, since best practices are likely to emerge. The flexibility of the risk assessment framework and its applicability to a wide variety of services mean that the definition of successful compliance will also evolve. The current VLOPs are likely to set the stage and can be a helpful benchmark for those reaching the threshold at a later time.

In the meantime, such platforms need to keep in mind that the MAU numbers reporting requirement was not a one-off obligation – it is now a recurring duty. The Commission and national Digital Services Coordinators also reserve the right to ask for updated MAU numbers or for explanations about the underlying calculations at any time. 

How can Tremau help?

If you are a very large online platform or are likely to become one in the near future, you are likely already working on charting the relevant risk areas where an assessment is required under the DSA. However, risk assessments and necessary mitigation measures can be a handful for internal teams to deploy alone. 

Tremau’s expert advisory team can help you carry out these risk assessments as well as provide support to your internal teams. Further, at Tremau we specialize in assessing your existing processes and practices to flag where the current mitigation measures fall short of the DSA requirements and best practices in order to offer best-in-class remediation plans and long-term compliance.

Feel like you need support now? Tremau can help – check out our advisory services to know more.  

Gonzalez, Taamneh, and the Future of Content Moderation

The US “may be about to change the law on this massively complex question about human rights on the Internet through the backdoor”, tweeted Daphne Keller, Platform Regulation Director at the Stanford Cyber Policy Centre, in a thread detailing the Gonzalez and Taamneh cases that will be appearing at the Supreme Court this week. While the aforementioned cases raise questions on platform liability with regards to content they leave up on the platform, recently passed laws in Texas and Florida – which will also be tested in the Supreme Court – limit the content platforms can take down.

These four cases are at the heart of the catch-22 situation online platforms find themselves in: on the one hand there is pressure to remove content to protect user safety, and on the other, to leave content up to protect freedom of speech. At the core of this debate is whether online platforms can be held liable for the speech they host, and its outcome has the potential to completely transform the future of the tech industry.

Platform liability in the US – Section 230 in a nutshell 

Section 230 of the Communications Decency Act (1996) – 26 words that set the stage for the internet as we know it today – shields online platforms from liability for content posted by their users. More than two decades after its enactment, it remains hotly debated, with some arguing it provides too much protection for online platforms, while others maintain that this section is crucial to preserving freedom and diversity on the internet. Despite many attempts, Congress has had limited success in introducing substantive changes to the law. The Supreme Court is therefore in particularly challenging territory – it has to rule on an issue that lawmakers have not been able to agree on for decades.

What are the Supreme Court hearings about?

The Gonzalez v. Google LLC case involves a dispute between the family of a victim of the 2015 Paris terror attacks and Google, over YouTube’s recommendations of terrorist content. Similarly, Twitter Inc. v. Taamneh follows the 2017 terrorist attack on an Istanbul nightclub, where the relatives of a victim have accused Twitter, Facebook, and Google of aiding and abetting the attack by enabling the dissemination of terrorist content. As both of these cases consider whether a platform can be held responsible for content it hosts, they open Section 230 to potential modifications.

Defending the current liability protection, Google has argued that Section 230 promotes free expression online and empowers websites to create their own moderation rules to make the internet a safer place. While this law has so far protected platforms when it comes to content their users post, the primary question in this case is whether Section 230 also protects the platforms’ recommendation algorithms – a feature that is crucial to many platforms’ architectures today, and for some, like TikTok, the recommendation is the service.

On the other hand, in the Taamneh hearing, the court will set aside Section 230 to discuss whether a platform can be charged with aiding and abetting terrorism if the service was not directly employed for the attack. In a previous hearing, the 9th Circuit ruled that indeed it can be held responsible; however, as the court did not consider Section 230, the platforms remained protected under it. If the Supreme Court weakens the general liability protection in the Gonzalez case, this could create a significant problem for platforms, as they could all be held liable for aiding and abetting terrorism.

How are the Texas and Florida laws impacting online platforms?

Both states have recently passed laws that make it illegal for online platforms to moderate content or restrict users in many cases. For both laws, petitions are pending before the Supreme Court, which has decided not to take them up this year. These laws add to the tensions around regulation in the online space and the potential rulings of the Gonzalez and Taamneh cases. While the latter two urge platforms to do more to moderate certain content on their services – to the extent of holding them liable for promoting and/or hosting such content – the state laws argue that content should not be moderated under provisions of free speech.

Notably, in the case of the Texas law, House Bill 20 forbids large social media platforms from moderating based on the “viewpoint of the speaker” – in this case, ‘lawful but awful’ content would be required to stay up as long as it is not illegal. In a panel organised by the Stanford Cyber Policy Centre on February 17th, speakers highlighted that this could pose specific risks to children. For example, content promoting eating disorders and self-harm would be required to stay up, if content discouraging the same was also up, as both could be attributed to the viewpoint of the speaker.

To remove or not to remove?

These contradictory laws and decisions promise to transform content moderation on online platforms as it exists today. At its core, while the state laws mandate that platforms do not remove certain content and users, the Supreme Court cases could change Section 230 and make platforms liable for the content they recommend or fail to remove. This conflict could seemingly be resolved with the upcoming hearings, or alternatively, open up a Pandora’s box of tech regulation problems. Ultimately, the decisions in the upcoming days will impact not just the online ecosystem, but also the principles that govern it. 

How can Tremau help you?

Whatever the decision of the hearings may be, one thing is certain – it has the potential to impact all online platforms and their content moderation processes. Would you like to know more about how these rulings may impact your business? Reach out to our tech experts on info@tremau.com.

Content Moderation: Key Practices & Challenges

Content moderation has become increasingly important for online platforms to protect their users from potential abuses. The evolving regulatory landscape has also put growing responsibilities on the way user-generated content should be moderated online. Notably, the upcoming Digital Services Act (DSA), which affects almost every online service provider active in the EU, will bring unprecedented obligations to online services in a wide range of sectors, as well as considerable penalties for those who fail to meet the new requirements (up to 6% of annual global turnover). 

Similar regulations are under development in multiple jurisdictions around the world (Australia, Canada, UK, and South Korea – to name a few). Thus, designing and implementing a strategy for content moderation is vital not only for contributing to online trust & safety and ensuring the retention and satisfaction of the platforms’ users, but also for a company’s ability to do business in the markets where regulations are being developed. A company’s success will largely be determined by the degree to which it has managed to ingrain the new content moderation requirements into its business model.

Content moderation practices 

To understand the challenges in achieving efficient and effective content moderation, Tremau interviewed content moderators and managers working in the Trust & Safety departments of more than 30 companies, ranging from mega platforms to early-stage start-ups. Notwithstanding the different types of content that moderators are exposed to given the diversity of online platforms, we identified a set of common practices adopted by companies and clear areas for improvement. Three major areas emerged: detection of harmful or illegal content, moderation processes and controls, and crisis management.

I. Detection of harmful or illegal content

A major challenge in content moderation is the tremendous volume of content produced in real time. In order to accurately separate the very small proportion of potentially problematic content from the rest, companies often use a mixture of reactive moderation (answering user reports) and proactive moderation (automated detection tools). Based on pre-determined rules or machine learning detection models, AI-powered automated detection usually selects content that is potentially illegal, such as terrorist content or counterfeit products, or content that clearly violates a company’s terms of service. Many companies also employ automated tools as a preliminary filter: depending on the confidence of the detection, a human moderator is introduced into the process to verify results.
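To make this hybrid setup concrete, the sketch below shows one way such confidence-based routing might look. The thresholds, labels and the route_item helper are illustrative assumptions, not values or code from any particular platform.

```python
def route_item(confidence: float, high: float = 0.95, low: float = 0.30) -> str:
    """Decide how to handle a piece of flagged content based on model confidence.

    The thresholds are illustrative: in practice they are tuned per policy
    area and per the cost of false positives versus false negatives.
    """
    if confidence >= high:
        return "auto_action"      # clear violation: act automatically, log for audit
    if confidence <= low:
        return "auto_allow"       # clearly benign: no action, keep for QA sampling
    return "human_review"         # nuanced case: queue for a human moderator


# Example: a hypothetical detection score of 0.72 falls in the uncertain band.
print(route_item(0.72))  # -> "human_review"
```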

Despite the improved efficiency brought by automated detection, the overwhelming majority of our interviewees pointed out that there is still significant room for improvement. One frequently mentioned drawback is the difficulty of treating nuanced cases, which makes a human moderator’s job indispensable. Moreover, no AI tool can be a perfect substitute for human intervention in this job, given continuously evolving and highly diverse cultural contexts and requirements. Thus, automated content moderation tools should not be built on the principle of replacing human moderators, but of working with them.

II. Moderation process & controls

A common issue with content moderation systems is that companies typically have to continuously fill the gap between their existing workflows and the evolving regulatory obligations – often by frequently “patching” their moderation systems. Thus, a much-needed capability is to build content moderator-centric systems according to the company’s evolving regulatory obligations, allowing better coordination among different teams and a more effective and efficient moderation strategy.

i. Multi-level moderation process

Violations of content policies are often categorized into pre-defined groups such as violence, foul language, and extremism. However, moderators can often find themselves reviewing much more nuanced, complex or context-sensitive cases. A key practice adopted by various companies is to establish multi-level moderation teams and processes. In this structure, frontline moderators are responsible for making a “Yes or No” decision on the most clear-cut cases, and escalate more complicated cases to higher-level moderators who have more experience as well as access to more information. In the rare event of a very difficult case, senior Trust & Safety managers or other departments concerned discuss and make the final decision.
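A simplified model of such a tiered process could look like the following sketch; the tier names, the Case class and the escalation rule are our illustrative assumptions rather than a description of any specific company's workflow.

```python
from dataclasses import dataclass, field

TIERS = ["frontline", "senior_moderator", "policy_team"]  # illustrative tier names

@dataclass
class Case:
    case_id: str
    summary: str
    tier: str = "frontline"
    history: list = field(default_factory=list)

    def decide(self, decision: str) -> None:
        """Record a clear-cut decision ('remove' or 'keep') at the current tier."""
        self.history.append((self.tier, decision))

    def escalate(self, reason: str) -> None:
        """Send a nuanced case one tier up, keeping an audit trail of why."""
        current = TIERS.index(self.tier)
        if current + 1 < len(TIERS):
            self.history.append((self.tier, f"escalated: {reason}"))
            self.tier = TIERS[current + 1]

# Example: a frontline moderator escalates a context-sensitive case.
case = Case("c-104", "satirical post possibly containing hate speech")
case.escalate("context-dependent, needs language expertise")
print(case.tier, case.history)
```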

ii. Moderation decision trees

Another practice to support moderation decision-making for frontline workers is to use a decision tree during the moderation process, a practice that has been widely adopted by customer support departments and other call centers. By decomposing a complex moderation question into an array of smaller and easier options, a decision tree allows moderators to judge cases in a more structured and standardized manner, which can boost the efficiency and quality of the overall process.
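One minimal way to encode such a tree is as nested yes/no questions that lead to outcomes, as in the sketch below; the questions and outcome labels are invented for illustration and do not reflect any actual policy.

```python
# A hypothetical moderation decision tree: internal nodes are yes/no questions,
# leaves are the outcomes presented to the moderator.
DECISION_TREE = {
    "question": "Does the content depict or incite violence?",
    "yes": {
        "question": "Is it clearly newsworthy or documentary in nature?",
        "yes": "keep_with_warning_label",
        "no": "remove_and_log",
    },
    "no": {
        "question": "Does it violate another term of service (e.g. spam)?",
        "yes": "remove_and_log",
        "no": "keep",
    },
}

def walk(node, answers):
    """Follow moderator answers ('yes'/'no') down the tree to an outcome."""
    for answer in answers:
        node = node[answer]
        if isinstance(node, str):      # reached a leaf: this is the decision
            return node
    return node["question"]            # more answers needed

print(walk(DECISION_TREE, ["yes", "no"]))  # -> "remove_and_log"
```

Encoding the questions this way also makes the moderator's path through the tree easy to store, which feeds directly into the quality assurance and transparency reporting discussed below.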

iii. Quality Assurance

Accuracy and consistency of content moderation are also key concerns. Companies develop both ex-ante and ex-post control measures to improve the quality of content moderation. Intensive training before starting as a moderator is common across companies, and regular training sessions also take place in many companies to keep moderators up to date with the latest regulatory or terms-of-service updates.

Considering the constantly evolving regulations, at both national and international levels, companies often draft extensive and detailed guidelines for moderators to refer to before reaching a decision. Reviewing the accuracy of past moderation decisions on a regular basis is also widely adopted by companies. Often a random sample of the total cases treated by a moderator in any given period will be pulled from stored data and sent for examination, or some cases may be given to multiple moderators to examine their consistency; the calculated accuracy rate is often a key component of the moderators’ KPI. 
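In practice, this kind of review can be as simple as drawing a random sample of a moderator's past decisions and comparing them against a second "gold" review; the sketch below is a generic illustration of that idea, with invented field names, not a prescribed QA methodology.

```python
import random

def qa_sample(decisions: list[dict], sample_size: int, seed: int = 0) -> float:
    """Pull a random sample of past decisions and compute an accuracy rate.

    Each decision dict is assumed (for illustration) to hold the moderator's
    call and a reviewer's 'gold' call added during the QA pass.
    """
    rng = random.Random(seed)
    sample = rng.sample(decisions, min(sample_size, len(decisions)))
    agreed = sum(1 for d in sample if d["moderator_call"] == d["reviewer_call"])
    return agreed / len(sample)

# Hypothetical data: three reviewed decisions, two of which match the reviewer.
decisions = [
    {"moderator_call": "remove", "reviewer_call": "remove"},
    {"moderator_call": "keep", "reviewer_call": "remove"},
    {"moderator_call": "keep", "reviewer_call": "keep"},
]
print(f"accuracy: {qa_sample(decisions, sample_size=3):.0%}")
```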

III. Crisis management 

Another key challenge during the moderation process is that content moderators’ tasks involve much more than simply judging whether a post should be removed or not. Crisis management is also part of their job when they encounter urgent cases, such as a livestream of self-harm or a terrorist attack like the livestreamed Buffalo shooting. Such cases demand immediate outreach to law enforcement or other appropriate local authorities and should be considered the digital “first aid” of our time.

Content moderators also need to provide some degree of customer support, as users may file complaints against certain moderation decisions – hence moderators must also be able to easily retrieve all relevant information about past cases or users to better communicate with them.

Toward a better design of content moderation 

Although content moderation is essential for almost every online platform that hosts regular interactions among users, most companies do not have enough resources to build efficient and effective internal moderation systems, let alone to maintain them and keep them up to date. On this note, Tremau’s conversations with content moderators enabled us to identify a number of recommendations for creating efficient and consistent content moderation processes.

For example, given the multi-faceted nature of content moderation, the most efficient approach to enhancing content moderation processes is to integrate related functions and controls into a more moderator-centric, centralized system. This spares moderators from constantly switching between tools and ensures a smoother workflow, significant efficiency gains, and more accurate KPIs and quality control.

A centralized system also allows data to be reconciled in a unified platform, thereby giving moderators the complete context needed to make decisions and enabling automated transparency reporting. It also facilitates a risk-based approach via prioritization, which allows moderators to treat cases more effectively and enables the implementation of convenient contact channels with authorities and other stakeholders in case of emergencies. Such rapid reaction mechanisms are still not mature enough in many companies. 

With access to more efficient processes as well as analytics, it then becomes possible to better protect moderators’ well-being against traumatizing content.

What does this mean for your business?

To meet the challenges of protecting their users and complying with continuously evolving regulations, a number of online platforms will need to enhance their content moderation processes and controls. The measures discussed above streamline moderation processes to be more efficient, and – with appropriate structuring of data – can automate transparency reporting, which is increasingly in demand across voluntary codes and regulations.

With regulations such as the Terrorist Content Online Regulation, which sets a 1-hour limit for online services to remove Terrorist and Violent Extremist Content (TVEC) from their platforms, further investment is also needed in reliable mechanisms to prioritize content in moderation queues. Thus, “Compliance by Design” will become a necessary focus for building effective and future-proof content moderation systems. Successfully building these capabilities will soon become a key differentiator, and even a critical factor for survival.

How can Tremau help you?

Tremau’s solution provides a single trust & safety content moderation platform that prioritizes compliance as a service and integrates workflow automation and other AI tools. The platform ensures that providers of online services can respect all DSA requirements while improving their key trust & safety performance metrics, protecting their brands, increasing handling capacity, as well as reducing their administrative and reporting burden.

We would like to thank all the content moderators & managers who took the time to talk to us and contributed to our findings.

Tremau Policy Research Team

How can you make content moderation processes more efficient?

The growing regulatory spotlight on content moderation, shorter deadlines for content removal, the increasing volume of potentially illegal or harmful content to be reviewed, and the pressing need to protect both the safety and the freedom of expression of users have all increased the urgency of enhancing existing online moderation practices. With these practices becoming widespread, it is important to ensure that the process is effective, efficient, of high quality, and that it keeps the best interests of all stakeholders at heart.

Content moderation processes

To achieve this, let us look at three key points in the process that can be optimized going forward:

I. Management of user reports

Receiving continuous alerts from users can be overwhelming for human moderators, especially over extended periods of time. At this juncture, it is crucial to prioritize and manage alerts rather than follow, for example, a “first-in-first-out” or other sub-optimal approach. A solution is to ensure that user reports are labeled according to the level of harm they could cause (following a risk-based approach) and based on statistical analysis of the available metadata. This is important for user safety – especially in cases of emergency – as it allows time-sensitive cases to be dealt with quickly. It can also be beneficial for moderator safety, as they are warned whether they will be viewing more or less harmful content. A less considered point when discussing the management of user reports is the moderators’ experience of the process itself: an optimized moderator screen can save decision-making time and increase overall process efficiency by more than 20%.
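One way to move beyond first-in-first-out is to score each incoming report and serve the queue by severity. The sketch below illustrates the idea; the harm weights, metadata signals and field names are purely illustrative assumptions.

```python
import heapq

# Illustrative harm weights per reported category; real values would come from
# the platform's own risk assessment, not from this sketch.
HARM_WEIGHTS = {"self_harm": 100, "terrorist_content": 95, "hate_speech": 60, "spam": 10}

def priority(report: dict) -> float:
    """Higher score = handled sooner. Combines harm level with simple metadata signals."""
    score = HARM_WEIGHTS.get(report["category"], 20)
    score += 10 * report.get("reporter_count", 1)      # many reporters -> more urgent
    score += 15 if report.get("involves_minor") else 0
    return score

queue: list = []
for r in [
    {"id": "r1", "category": "spam", "reporter_count": 1},
    {"id": "r2", "category": "self_harm", "reporter_count": 3, "involves_minor": True},
]:
    heapq.heappush(queue, (-priority(r), r["id"]))     # negate: heapq is a min-heap

print(heapq.heappop(queue))  # the self-harm report comes out first
```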

II. End-to-end moderation and complaint handling 

Another pain point in content moderation is managing the process across a variety of platforms, people, and teams. As regulations demand increasing responsiveness and complaint handling from online services, it is important to have the right mechanisms in place for end-to-end moderation and complaint handling that helps build user trust and protect your brand. For instance, a moderation case cannot be closed immediately once the initial notice has been handled. This is because, under the Digital Services Act (DSA), a user can still contest the handling of the case for at least 6 months and even take the complaint to an out-of-court dispute settlement body. Content moderation teams will thus need to account for the possibility of the case continuing beyond the initial handling. This includes making sure that complaints are uniquely identifiable to streamline the process and that all relevant information is easily available to ensure process quality.
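In data terms, "keeping the case open" could look something like the sketch below: each case carries a unique identifier and remains contestable for roughly the six-month window the DSA provides for internal complaints. The class and field names are our own illustration, not a required design.

```python
import uuid
from datetime import datetime, timedelta

CONTEST_WINDOW = timedelta(days=182)  # roughly 6 months, per the DSA complaint period

class ModerationCase:
    """Minimal illustrative case record: uniquely identifiable and reopenable."""

    def __init__(self, content_id: str, decision: str):
        self.case_id = str(uuid.uuid4())          # unique ID to trace complaints back
        self.content_id = content_id
        self.decision = decision
        self.decided_at = datetime.utcnow()
        self.status = "handled"                   # not "closed": it may still be contested

    def contest(self, now: datetime | None = None) -> bool:
        """Reopen the case if the user complains within the contest window."""
        now = now or datetime.utcnow()
        if now - self.decided_at <= CONTEST_WINDOW:
            self.status = "under_complaint_review"
            return True
        return False

case = ModerationCase("post-881", decision="removed")
print(case.case_id, case.contest())  # True: complaint lodged inside the window
```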

III. Upstream structuring of data for transparency reporting

The third point to consider is growing transparency reporting requirements. Over recent years, calls for transparency reports from online services have come from civil society and governments alike. This has led to a variety of different frameworks from private actors in the ecosystem and resulted in transparency reporting becoming a key part of digital legislation, as seen in the DSA. Transparency is critical to ensure the safe and fair moderation of online platforms. To produce comprehensive transparency reports, it is crucial to keep a clear and consistent account of all requests for removal or restriction of content. To do this, the tools used by the moderators need to be effective at managing large volumes of notices as well as streamlining storage and labelling of data. 
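Consistent structuring starts at intake: if every removal or restriction request is recorded in the same shape, later aggregation for a transparency report becomes straightforward. The record below is a minimal illustrative schema with assumed field names, not a mandated format.

```python
from dataclasses import dataclass, asdict
from datetime import datetime

@dataclass
class NoticeRecord:
    """Illustrative schema for storing a removal/restriction request consistently."""
    notice_id: str
    received_at: datetime
    source: str              # e.g. "user_notice", "trusted_flagger", "automated_detection"
    alleged_violation: str   # e.g. "counterfeit_product", "illegal_hate_speech"
    content_reference: str   # stable reference to the reported item
    action_taken: str = "pending"
    action_basis: str = ""   # e.g. "illegal_content" vs "terms_of_service"

record = NoticeRecord(
    notice_id="n-2024-000123",
    received_at=datetime.utcnow(),
    source="trusted_flagger",
    alleged_violation="counterfeit_product",
    content_reference="listing/55721",
)
print(asdict(record))  # ready to be stored and later aggregated for reporting
```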

What does this mean for you?

Optimizing your content moderation processes will allow you to be more efficient with your costs as well as more effective in protecting your users, moderators, and brand. To achieve this, it is important to introduce new processes, incorporate automation and intelligence to improve speed and quality, and build moderator-centric tools. More importantly, it is critical to prioritize quality assurance to ensure that the right balance between safety and freedom of expression online is struck.

With regard to regulation, under the DSA a user report can give the company knowledge of the content concerned, making it responsible for how the report is handled. Poor content moderation can raise reputational, regulatory, and other business risks, which can lead to loss of users and market share, as well as significant fines (up to 6% of global annual turnover). Thus, adopting a content moderation system that meets technical compliance requirements from the get-go, and that prioritizes human safety and quality, is crucial.

How can Tremau help you?

The Tremau tool is a single end-to-end content moderation platform designed to help you streamline your processes, automate them whenever possible, manage and prioritize reported content (whatever the source of detection), and continuously produce audit trails for transparency reporting – enabling you to cut costs and collaborate more effectively. The end-to-end process on a single platform allows all team members to see the progression of cases and ensures better communication, faster treatment, higher consistency and quality, and fewer bottlenecks in internal handling – all while ensuring the privacy of your users.

The tool is also built to ensure a smooth experience for moderators, by limiting the number of clicks and screen changes and by including API connections to external stakeholders to ensure rapid contact. Finally, the tool collects and analyzes data throughout the end-to-end moderation process to ensure that nothing falls through the cracks and that full transparency can be maintained. Such improvements enable platforms to react faster in removing or restricting content, ultimately protecting users and society. Moreover, the tool keeps the well-being and retention of moderators at its core by taking steps to limit their exposure to harmful content and streamline their tasks.

To learn more about how Tremau can help you, contact us at info@tremau.com.

Tremau Policy Research Team

Join our community

Stay ahead of the curve – sign up to receive the latest policy and tech advice impacting your business.