Gonzalez, Taamneh, and the Future of Content Moderation

The US “may be about to change the law on this massively complex question about human rights on the Internet through the backdoor”, tweeted Daphne Keller, Platform Regulation Director at the Stanford Cyber Policy Center, in a thread detailing the Gonzalez and Taamneh cases that will be argued before the Supreme Court this week. While those cases raise questions about platform liability for content left up on a platform, recently passed laws in Texas and Florida – which will also be tested in the Supreme Court – limit the content platforms can take down.

These four cases are at the heart of the catch-22 in which online platforms find themselves: on the one hand there is pressure to remove content to protect user safety, and on the other, to leave content up to protect freedom of speech. At the core of this debate is whether online platforms can be held liable for the speech they host, and its outcome has the potential to completely transform the future of the tech industry.

Platform liability in the US – Section 230 in a nutshell 

Section 230 of the Communications Decency Act (1996) – 26 words that set the stage for the internet as we know it today – shields online platforms from liability for content posted by their users. More than two decades after its enactment, it remains hotly debated: some argue it provides too much protection for online platforms, while others maintain that it is crucial to preserving freedom and diversity on the internet. Despite many attempts, Congress has had limited success in introducing substantive changes to the law. The Supreme Court is therefore in particularly challenging territory – it has to rule on an issue on which lawmakers have not been able to agree for decades.

What are the Supreme Court hearings about?

The Gonzalez v. Google LLC case involves a dispute between the family of a victim of the 2015 Paris terror attacks and Google, over YouTube’s recommendations of terrorist content. Similarly, Twitter, Inc. v. Taamneh follows the 2017 terrorist attack on an Istanbul nightclub, where relatives of a victim have accused Twitter, Facebook, and Google of aiding and abetting the attack by enabling the dissemination of terrorist content. As both cases consider whether a platform can be held responsible for content it hosts, they open Section 230 to potential modifications.

Defending the current liability protection, Google has argued that Section 230 promotes free expression online and empowers websites to create their own moderation rules to make the internet a safer place. While the law has so far protected platforms from liability for content their users post, the primary question in this case is whether Section 230 also protects platforms’ recommendation algorithms – a feature that is crucial to many platforms’ architectures today; for some, like TikTok, the recommendation is the service.

On the other hand, in the Taamneh hearing, the Court will set aside Section 230 to consider whether a platform can be charged with aiding and abetting terrorism if its service was not directly employed for the attack. In an earlier ruling, the 9th Circuit held that platforms can indeed be held responsible; however, as that court did not consider Section 230, the platforms remained protected under it. Should the Supreme Court also weaken the general liability protection in Gonzalez, this could create a significant problem for platforms, as they could all be held liable for aiding and abetting terrorism.

How are the Texas and Florida laws impacting online platforms?

Both states have recently passed laws that make it illegal for online platforms to moderate content or restrict users in many cases. Petitions against both laws are pending before the Supreme Court, which has decided not to take them up this year. These laws add to the tensions around regulation of the online space and the potential rulings in the Gonzalez and Taamneh cases. While the latter two push platforms to do more to moderate certain content on their services – to the extent of holding them liable for promoting and/or hosting such content – the state laws restrict moderation on free-speech grounds.

Notably, the Texas law, House Bill 20, forbids large social media platforms from moderating based on the “viewpoint of the speaker” – meaning ‘lawful but awful’ content would be required to stay up as long as it is not illegal. In a panel organised by the Stanford Cyber Policy Center on February 17th, speakers highlighted that this could pose specific risks to children. For example, content promoting eating disorders and self-harm would be required to stay up if content discouraging the same was also up, as removing either could be construed as discriminating against a speaker’s viewpoint.

To remove or not to remove?

These contradictory laws and decisions promise to transform content moderation on online platforms as it exists today. At its core, while the state laws mandate that platforms do not remove certain content and users, the Supreme Court cases could change Section 230 and make platforms liable for the content they recommend or fail to remove. This conflict could seemingly be resolved with the upcoming hearings, or alternatively, open up a Pandora’s box of tech regulation problems. Ultimately, the decisions in the upcoming days will impact not just the online ecosystem, but also the principles that govern it. 

How can Tremau help you?

Whatever the decision of the hearings may be, one thing is certain – it has the potential to impact all online platforms and their content moderation processes. Would you like to know more about how these rulings may impact your business? Reach out to our tech experts on info@tremau.com.

Regulating Online Matchmaking: Trends & Challenges

Online dating platforms have exploded in popularity over the past decade, with their combined global user bases topping 323 million and earning the industry $5.61 billion in 2021. However, this exponential growth has brought several enduring problems in creating an accessible virtual dating space where everyone feels safe and included. With a projected 15% increase in the industry’s usership by 2027, investing in content moderation and in user safety, trust, and well-being is becoming a critical business priority for these platforms.

Challenges faced by matchmaking sites

Online harassment remains a persistent problem on social media platforms, and dating sites are no exception. Women in particular face frequent virtual stalking, aggression, and threats of violence, as well as unsolicited explicit images – a phenomenon particularly prevalent on dating apps. Around 52% of women aged 18-35 reported having been sent unsolicited explicit images by new matches, and another 20% reported having been subjected to threats of physical violence.

Even more concerning is research published in 2019 which found that no free-to-use dating platform screens its users for prior sexual offences, allowing predators to use these platforms anonymously. Due to a lack of effective moderation, people have to decide whether being subjected to harassment is a price worth paying in order to join or remain on these platforms.

Racial prejudice also remains an issue for many individuals online, despite the rise of more inclusive and accessible dating sites. A 2018 OkCupid study found that Black women and Asian men were the least likely groups to receive messages or responses, while both white men and white women tended to be reluctant to date other ethnicities. This problem is exacerbated within the gay community, where dating apps have identified pervasive issues with racial discrimination.

Another hurdle for online platforms is the question of privacy and personal data. To keep their services free, many websites and social media companies sell their users’ data to third parties for targeted advertising. The extent of this was not well understood until 2019, when the Norwegian Consumer Council discovered that many popular dating apps collect and sell information such as users’ exact location, sexual orientation, religious and political beliefs, and even drug use and medical conditions. This set off alarm bells for consumers and regulators alike, who began investigating ways to curtail what information companies could freely transmit to outsiders.

Companies have been working to solve these issues internally. In 2020, for example, Tinder rolled out new features aimed at ensuring user safety when meeting matches for the first time, including an emergency responder-activated “Panic Button”, in-app safety check-ins during a date, and real-time photo verification to prevent catfishing (impersonating someone else online). Bumble made headlines this year when it released Private Detector, an open-source AI tool that detects and automatically blurs explicit images sent within the app. Other apps opted to remove the ability for users to sort profiles by race; however, the efficacy of this action is still debated.

Future trends in e-dating

As consumers demand more accountability from companies to make online dating a more inclusive and secure space, national governments are taking note and passing legislation to rein in these actors.

The UK has published a draft Online Safety Bill which includes a wave of regulations for social media platforms, including making companies liable for responding to reports of abuse or harassment. The law will also make “cyberflashing” – sending unsolicited explicit images – a criminal offence. In fact, lobbying for cyberflashing laws by companies like Bumble has successfully pushed through similar bills in Texas, Virginia, and most recently California.

Similarly, in Europe, the Digital Services Act (DSA), which will be live from mid-November, aims to better protect users, establish clear frameworks of accountability for platforms, and foster competition. As long as a dating site has users in an EU Member State, it will face the bulk of the obligations the regulation mandates. See what exactly the DSA means for your business here.

Judging by the trend of recent regulations, it is clear that governments around the world will continue to focus on user-oriented regulation of online companies, so it is imperative that dating apps move quickly to keep up. Not complying with the DSA may result in fines of up to 6% of the platform’s global annual turnover, or even the termination of the platform’s services in the EU.

Implications for your business

The EU alone represents a large portion of these platforms’ user base, meaning providers will need to ensure they make several immediate operational changes in order to meet new rules and avoid hefty penalties. 

Firstly, dating platforms will need to declare a single point of contact in the EU that can be held legally accountable for infringements of the DSA. Dating service providers will then need to ensure they have implemented a well-designed, transparent content moderation system that provides the tools for users and the platform alike to adequately respond to law enforcement, trusted flaggers, and out-of-court dispute requests.
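
As an illustration of what responding to these different notice channels can look like in practice, here is a minimal sketch in Python of notice triage by source. The source categories follow the DSA’s vocabulary, but the priority ordering and the names (`NoticeSource`, `triage`) are illustrative assumptions, not requirements from the regulation.

```python
from enum import Enum

class NoticeSource(Enum):
    USER_REPORT = "user_report"
    TRUSTED_FLAGGER = "trusted_flagger"
    LAW_ENFORCEMENT = "law_enforcement"
    DISPUTE_BODY = "out_of_court_dispute_body"

# Illustrative priorities only: the DSA requires trusted-flagger notices to be
# handled with priority, but the exact ordering below is an assumption.
PRIORITY = {
    NoticeSource.LAW_ENFORCEMENT: 0,
    NoticeSource.TRUSTED_FLAGGER: 1,
    NoticeSource.DISPUTE_BODY: 2,
    NoticeSource.USER_REPORT: 3,
}

def triage(notices: list[tuple[NoticeSource, str]]) -> list[tuple[NoticeSource, str]]:
    """Order incoming notices so that higher-priority channels are reviewed first."""
    return sorted(notices, key=lambda notice: PRIORITY[notice[0]])

queue = [
    (NoticeSource.USER_REPORT, "profile-42: harassment"),
    (NoticeSource.TRUSTED_FLAGGER, "profile-7: suspected underage user"),
]
print(triage(queue))  # the trusted-flagger notice is reviewed first
```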

Another major hurdle for companies will be a range of stipulations as to the design of the platform itself. Indeed, the new due diligence obligations for very large online platforms (VLOPs) will impact the way dating sites allow user interaction, share content, show advertisements, and more. The DSA also places a priority on protection of minors, emphasising preventative risk assessments that, in the case of dating sites, would include clearly laying out the company’s procedure to ensure age verification prevents minors from using the service. 

In short, all online platforms and service providers will be required to adopt a robust, streamlined approach to content moderation and user safety, underpinned by continuous compliance and transparency reporting.

How can Tremau help you?

Time is short for companies to get their houses in order in the face of the recently adopted DSA. To help your platforms, Tremau offers a comprehensive, single trust & safety content moderation platform that prioritises compliance as a service by integrating workflow automation alongside other AI tools. Tremau’s platform ensures that e-dating providers and other VLOPs (very large online platforms) are up to standard for the DSA requirements while also improving their key trust & safety performance metrics. This way, brands have the peace of mind of protecting their users and of being protected themselves, and can also increase their handling capacity while reducing the growing administrative and reporting burden of content moderation.

For further information on these regulations and how they can affect your business, please contact info@tremau.com.

Tremau Policy Research Team

What does the Copyright Directive mean for you?

In 2019, the EU published two Directives to set out copyright standards across the European Union and encourage the development of the educational, cultural, and publishing sectors: Directive (EU) 2019/789 (referred to as Directive 789 here) and Directive (EU) 2019/790 (referred to as Directive 790 here). These two directives have had a significant impact on online services, requiring that they now also moderate content on their platforms for copyright infringements. Unlike “regulations”, which apply in their entirety across the EU, a “directive” sets out a goal that all EU countries must achieve by transposing it into their national law. As such, it is up to the Member States to devise laws that meet the obligations of the directives.

Overview of the Directives

The updated rules on copyright aim to achieve a balance of interests amongst various stakeholders across the EU Member States: creators, authors, publishers, researchers, online service providers, general Internet users, and the broader public. Directive 789 aims to facilitate the cross-border transmission of TV and radio programs in the EU, and to simplify rules on copyrights and related rights for online retransmission of those programs. Compared to Directive 789, which targets on-demand audio/video platforms and other broadcasters, Directive 790 pays more attention to online service providers that store and disseminate large volumes of user-generated content. 

Key points of the Directives

Directive 789

Country of origin principle
• To provide programs across borders, broadcasters only need to obtain authorization from rightsholders for the country where they have their main establishment.

Direct injection principle
• Refers to a technique by which a broadcaster transmits its programs to distributors via a private line, and the distributor then offers those programs to the public.
• In this case, both the broadcaster and the distributor need to obtain authorization for their part in communicating the works to the public.

Mandatory collective management of online retransmission rights
• Makes it easier to obtain authorization from copyright holders, as the right to grant or refuse authorization must be exercised through a collective management organization.

Directive 790

Copyright exceptions for public interests
• Makes it easier to use protected material for different purposes by introducing exceptions to copyright in cases such as the use of works for teaching, text and data mining, and the preservation of cultural heritage.
• Anyone can use and share copies of works of art in the public domain, without restrictions.

Sharing restrictions for copyright-protected content on online platforms
• Online content-sharing platforms need to obtain authorization from rightsholders to make protected works on the platform available to the public.
• If a licensing agreement is not reached, the platform needs to make “best efforts” to ensure that unauthorized content is not available on the platform, and to remove expeditiously any unauthorized content.

Protection for online press publications
• EU-based press publishers and authors are granted new rights for digital use of their publications by online service providers.
• Acts of hyperlinking and very short extracts are not copyright-protected.

Remuneration for creators, authors, and performers
• Member States are to ensure that creators receive fair remuneration for transferring or licensing their rights for exploitation to another party.
• New rules allow creators to modify their initial contracts if the original remuneration is unreasonably low compared to the revenues generated from the exploitation of their works.

Transparency and revocation
• Creators should receive, at least annually, up-to-date and comprehensive information about the exploitation of their works.
• Creators have a right of revocation in the event of non-use of their work.

Individuals are not targets of the new rules
• Internet users can continue to share content on social media and websites without copyright restrictions.
• Based on the principle of freedom of expression, any use of existing works for purposes of quotation, criticism, review, caricature, parody, and pastiche is explicitly allowed.
• Platforms should establish internal complaint mechanisms that allow users to appeal against erroneously removed content, and should swiftly restore removed content that is authorized on the platform.

What do these Directives mean for your business? 

Along with the recently published Digital Services Act (DSA), the two copyright directives result in a series of operational implications for your business. Firstly, providers will have to ensure that they have a process in place to swiftly gain authorization for any protected work uploaded on their platform. In parallel, they will have to ensure that there is an effective mechanism in place to take down (i.e., moderate) content that has not received authorization, and to restore it if it was erroneously removed. This moderation will need to be accompanied by an appropriate statement of reasons and redress mechanisms communicated to users when their content is taken down. Providers will also have to make sure that both the rightsholder (who complained or withheld the rights) and the user (who uploaded the content) are duly informed. In addition, the Directives indicate that creators, authors, and performers should be kept up to date with information about the exploitation of their work. The DSA, and multiple other codes, also bring in a transparency reporting process that will be crucial for businesses to be aware of and abide by. As such, there are several further implications that businesses will need to consider when they host user content.
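
To make the takedown-and-redress loop more concrete, below is a minimal sketch of how such a moderation record could be kept. The class and function names (`CopyrightTakedown`, `handle_appeal`, `notify`) are hypothetical and only illustrate the data points the Directives expect platforms to handle: the affected content, the statement of reasons, the two parties to inform, and a possible restoration after a successful complaint.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

def _now() -> datetime:
    return datetime.now(timezone.utc)

@dataclass
class CopyrightTakedown:
    """One record per removal of allegedly unauthorized, copyright-protected content."""
    content_id: str
    rightsholder_id: str                    # the party that complained or withheld a licence
    uploader_id: str                        # the user whose upload was affected
    statement_of_reasons: str               # explanation sent to the uploader
    removed_at: datetime = field(default_factory=_now)
    restored_at: Optional[datetime] = None  # set if removal is reversed after an internal complaint

def notify(recipient_id: str, message: str) -> None:
    print(f"[notify:{recipient_id}] {message}")  # placeholder for an email/in-app notification

def handle_appeal(record: CopyrightTakedown, appeal_upheld: bool) -> None:
    """Restore content and inform both parties if the internal complaint succeeds."""
    if appeal_upheld and record.restored_at is None:
        record.restored_at = _now()
        notify(record.uploader_id, f"Content {record.content_id} has been restored.")
        notify(record.rightsholder_id, f"Content {record.content_id} was restored after review.")
```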

Conclusion

Both directives were due to be transposed into national law in all EU countries by June 2021, and the DSA will be live as early as 2023 for very large online platforms. Online service providers that allow users to share copyright-protected content will need to assume more responsibility in negotiating licensing agreements with rightsholders. More importantly, all of this means that finding a practical balance between protecting users and rightsholders will be essential. To be compliant with all aspects of these directives, as well as any future regulation or legal obligation, providers of online services will have to invest in more compliant and efficient processes and systems.

For more information on new EU digital copyrights provisions, please contact info@tremau.com.

Tremau Policy Research Team

What Does the Draft Online Safety Bill Mean for Your Business?

On May 12th 2022, the UK Parliament published its Draft Online Safety Bill, moving one step closer to this regulation’s implementation. The Bill aims to create a framework for online regulation that upholds freedom of expression online while ensuring that people are protected from harmful content.

In its current form, the Online Safety Bill seeks to ensure that online platforms maintain a duty of care towards their users, and that they take action both against illegal content online – such as terrorist content, child sexual abuse material (CSAM), and priority categories of illegal content (likely to include offences such as the sale of illegal items or services, revenge pornography, and hate crimes) – and against ‘legal but harmful’ content (such as cyberbullying).

Who does the Online Safety Bill apply to?

The OSB primarily applies to online services that have links with the UK. To fall into scope, they must do one of the following: 

A service is seen to have “links with the UK” if it does one of the following: 

What obligations does the Online Safety Bill set out?

The Online Safety Bill sets out proportionate requirements for the companies within its scope. The Office of Communications (OFCOM), the UK’s communications regulator, will decide the categorization of companies into the following three groups (listed in order from the fewest to the greatest number of obligations):

Category 2B – User-to-user services that do not meet the threshold conditions of Category 1 (see below).

Category 2A – Regulated search services

Category 1 – The largest user-to-user services

The specific thresholds for the Categories will be set out in secondary legislation and will depend on the size of the platform (in users), its functionalities, and the resulting potential risk of harm it holds.

Figure: Obligations for the different categories under the Draft Online Safety Bill

How does the Online Safety Bill protect minors? 

The Online Safety Bill maintains a heavy focus on ensuring greater safety for children online, requiring all sites to adopt a ‘duty of care’ towards children. Platforms that are likely to be used by children will also have to protect young people from legal but harmful content, such as eating-disorder or self-harm content.

Online services will be required to allow for easy reporting of harmful content, and act on such reports quickly. Furthermore, services will be required to report any Child Sexual Exploitation and Abuse to the National Crime Agency.

Who will be enforcing the Online Safety Bill?

Under the OSB, OFCOM will be tasked with supervising providers of online services and enforcing the regulation. OFCOM will have the power to open investigations, carry out inspections, penalize infringements, impose fines or periodic penalty payments, and request the temporary restriction of a service in the case of a continued or serious offense. Furthermore, OFCOM will be required to categorize services, publish codes of practice, establish an appeal and super-complaint function, and establish mechanisms for user advocacy.

Failure to comply with obligations can result in fines of £18 million or 10% of qualifying worldwide revenue, whichever is higher. Furthermore, criminal proceedings can be brought against named senior managers of offending companies that fail to comply with information notices from OFCOM.
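
As a purely illustrative piece of arithmetic, the “whichever is higher” rule means the £18 million figure acts as a floor that only matters for providers with less than £180 million in qualifying worldwide revenue:

```python
def osb_maximum_fine(qualifying_worldwide_revenue_gbp: float) -> float:
    """Maximum OSB fine: £18 million or 10% of qualifying worldwide revenue, whichever is higher."""
    return max(18_000_000.0, 0.10 * qualifying_worldwide_revenue_gbp)

print(osb_maximum_fine(500_000_000))  # 50000000.0 – the 10% ceiling dominates
print(osb_maximum_fine(100_000_000))  # 18000000.0 – the £18m floor applies
```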

What does the Online Safety Bill mean for your business? 

The OSB imposes a number of new obligations for online service providers and introduces hefty fines to ensure compliance. To avoid these, providers of online services must implement a number of operational changes. 

Most immediately, providers need to ensure they have well-designed and easy-to-use notice-and-action and complaint-handling mechanisms in place. They also need to take measures to assess the levels of risk on their platform and keep records of their assessments. It is equally important to implement systems and processes that protect the free expression of journalistic and democratic content. Strong processes, supported by technology, are key for this.

Additionally, search services and category 1 platforms will be subject to specific rules in order to minimize the publication or hosting of fraudulent advertising. Finally, the protection of minors is central to the regulation and providers will have to implement child protection measures such as age verification and risk assessments into their platforms. 

Concretely, providers of online services will have to adopt a streamlined set of processes that allow for continuous compliance, notably with obligations such as transparency reporting. 

Next steps

It is important to note that the current version of the Online Safety Bill is a draft and is subject to change. On the 7th of September, Prime Minister Liz Truss confirmed that the government would continue with the Bill, with “some tweaks”. One item under heavy scrutiny is the Bill’s focus on ‘legal but harmful’ content, with opponents arguing that it runs counter to free speech. To that end, it is essential for organisations that may be impacted to start assessing their platforms in order to ensure compliance when the Online Safety Bill does eventually come into force.

How can Tremau help you?

Tremau’s solution provides a single trust & safety content moderation platform that prioritizes compliance as a service and integrates workflow automation and other AI tools. The platform ensures that providers of online services can respect all OSB requirements while improving their key trust & safety performance metrics, protecting their brands, increasing handling capacity, as well as reducing their administrative and reporting burden. 

To find out more, contact us at info@tremau.com.

Tremau Policy Research Team

An Overview of Transparency Reporting

The Internet has created enormous potential for free, democratic, and open exchanges. Yet, some have abused this incredible tool to propagate harmful and illegal content online. This has resulted in a growing interest in holding big tech platforms accountable for the way in which they moderate content on their services. This may include, but is not limited to, openly sharing information on data collection and access, as well as removal requests. Civil society has been key in putting pressure on big technology companies to be more transparent, which has resulted in the popularity of “transparency reports”. 

Transparency is critical for the well-functioning of democratic and safe exchanges, and a requirement for fair processes, so it is not surprising that this is also becoming a centerpiece of upcoming regulations for online platforms. 

In 2010, Google became the first internet company to publish a formal transparency report, a practice that became more widely adopted by 2013 amidst growing concerns, including about government surveillance. Since then, a number of principles and frameworks – ranging from civil society initiatives to government policies – have been adopted around transparency reporting. Today, most frameworks target transparency about the moderation of terrorist and violent extremist content (TVEC) or child sexual abuse material (CSAM) online; however, upcoming regulations, such as the European Digital Services Act, expand transparency reporting to cover a company’s entire content moderation process.

Overview of voluntary transparency reporting frameworks 

The following overview covers the main bodies and principles that guide transparency reporting today.

Global Internet Forum to Counter Terrorism (GIFCT), 2017 – Terrorist and extremist content on online platforms
• Requires its members to produce transparency reports and produces its own transparency reports.
• Through a multi-stakeholder approach, it defines the elements of meaningful transparency and holds its member tech companies accountable.

Tech Against Terrorism, 2017 – Terrorist and extremist content on online platforms; guidance on transparency reporting for online counterterrorism efforts (targeted at governments & small online service providers)
• Asks governments to detail the processes and systems they use to discover, report, and store terrorist content and activity, and what redress mechanisms they provide.
• Provides transparency reporting guidelines for tech companies and advises on community guidelines enforcement and methods to increase transparency around content moderation processes.

Santa Clara Principles, 2018 – Targeted at online service providers that carry out content moderation
• Recommends steps that companies engaged in content moderation should take to provide meaningful due process to impacted stakeholders.
• Aims to better ensure that the enforcement of content guidelines is fair, unbiased, proportional, and respectful of users’ rights.
• Sets out foundational and operational principles as well as implementation mechanisms.

EU Code of Conduct on Countering Illegal Hate Speech Online, 2019 – Targeted at voluntary industry signatories
• The Code of Conduct was created in 2016 in cooperation with tech companies to respond to xenophobia and racism online.
• Signatories commit to providing transparency reports and ensuring removal requests for illegal content are dealt with in less than 24 hours.

Centre for Democracy and Technology, 2021 – A framework for policymakers
• Focuses on users’ speech, access to information, and privacy from government surveillance.

OECD Voluntary Transparency Reporting Framework, 2022 – Terrorist and violent extremist content (TVEC) on platforms
• A response to the proliferation of different frameworks, definitions, and stakes recognized in other transparency reports.
• Sets a standard for baseline transparency on TVEC.
• Launched a portal for submitting and accessing standardized transparency reports from online services.

Tech Coalition, 2022 – Targeted at voluntary industry signatories for CSAM
• TRUST is a voluntary industry framework for transparency reporting that focuses on child sexual exploitation and abuse online.
• It takes into account the variety of digital services in this environment as well as differences in company size and maturity.

EU Code of Practice on Disinformation, 2022 – Targeted at voluntary industry signatories
• Created in 2018 and updated in 2022, this Code addresses disinformation, specifically in the context of Covid-19 and the war in Ukraine.
• Requests platforms to provide monthly reports on their efforts to promote authoritative data, improve users’ awareness, and limit disinformation and false advertising; it also sets up a Transparency Centre and a Task Force to oversee the implementation of the Code and keep it future-proof.

Regulations on transparency reporting

Aside from frameworks from civil society groups and voluntary codes created in cooperation with the industry, many governments have passed (or are in the process of passing) laws around online hate speech that encourage transparency reporting. As mentioned above, the DSA requires all online intermediaries to provide transparency reports, the details of which vary according to the type of service. The Platform Transparency and Accountability Act in the US also aims to address this growing issue and implement transparency legislation. Similarly, the proposed American Digital Services Oversight and Safety Act of 2022 sets out transparency reporting obligations for content moderation.

Implications for online service providers

With increasing demands for accountability and transparency from online platforms as well as governments, it is not surprising that numerous frameworks for transparency reporting have emerged. Despite the variations, at its core transparency reporting entails keeping a clear and consistent account of requests for the removal or restriction of content.

Conclusion

To ensure alignment with industry best practices and compliance with regulatory requirements for transparency, companies will need new processes and tools that are effective at handling and organizing large volumes of content moderation activities, and which are continuously aligned with rapidly evolving expectations and requirements.  Concretely, this means having the ability to track all actions taken on user content, all notices coming from every potential source, and even further, track all complaints about any content moderation decisions taken by the online service. Streamlining and unifying these workflows will be crucial for all players to remain compliant and ensure the trust of their users.
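
As a minimal sketch of what such tracking could look like in practice, the snippet below logs each moderation event and aggregates it along a few dimensions commonly requested in transparency reports. The field names and categories are illustrative assumptions, not a prescribed schema from any of the frameworks above.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ModerationEvent:
    """One row per action, notice, or complaint handled by the platform."""
    kind: str     # e.g. "notice", "removal", "restriction", "complaint"
    source: str   # e.g. "user_report", "trusted_flagger", "authority_order", "proactive"
    policy: str   # internal policy or legal ground invoked
    outcome: str  # e.g. "removed", "kept_up", "restored"

def transparency_summary(events: list[ModerationEvent]) -> dict[str, Counter]:
    """Aggregate counts along the dimensions typically asked for in transparency reports."""
    return {
        "by_kind": Counter(e.kind for e in events),
        "by_source": Counter(e.source for e in events),
        "by_policy": Counter(e.policy for e in events),
        "by_outcome": Counter(e.outcome for e in events),
    }

events = [
    ModerationEvent("notice", "trusted_flagger", "hate_speech", "removed"),
    ModerationEvent("complaint", "user_report", "hate_speech", "restored"),
]
print(transparency_summary(events)["by_outcome"])  # Counter({'removed': 1, 'restored': 1})
```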

To find out more, contact us at info@tremau.com.

Tremau Policy Research Team

Regulating Online Gaming: Challenges and Future Landscape

The online gaming industry is booming. With an annual growth rate estimated at 12.1%, the global gaming market will reach 435 billion USD by 2028. While the video game industry has been a vibrant market since the 1990s, the Covid pandemic brought unprecedented change to the industry. During lockdowns, online gaming became a major channel for people to connect with friends and strangers – transforming gaming from mere entertainment into a social experience. However, serious problems have also emerged in these new social spaces.

An overwhelming majority of the gaming community report having encountered online harassment while gaming. More dangerously, extremist content finds new forums for propagation and mobilization in these channels, as seen in recreations of mass shooting scenes with multiple connected devices. Thus, it is now more important than ever to pay attention to online trust & safety in the gaming industry.

Challenges to online gaming regulations

Effective strategies for regulating user-generated or interactional content are largely missing in the traditionally self-regulated online gaming industry. Conventional regulations on video games – such as age-rating systems based on the degree of violence, strong language, sexual content, and other illicit practices – only apply to content released by developers and are yet to extend to user-generated content. A rating system works well for console games that usually do not have user-interaction features. However, for games that involve multiple connected players and allow real-time interaction among them, an ex-ante rating system cannot evaluate the risk of exposure to harmful or illegal content created by other gamers. 

Lists of banned words and user report systems are widely implemented across games, but both have considerable limits. Banned-word lists can be easily circumvented by inventing slang terms and can snowball into censoring content that is not necessarily harmful or illegal. Report systems, meanwhile, often suffer from overburdened queues, inconsistent decisions by human moderators, and algorithms’ failure to understand nuanced cases.
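
To illustrate why banned-word lists have these limits, here is a minimal sketch of a naive keyword filter with basic character-substitution normalization. The blocklist entry and the substitution map are hypothetical; the point is that simple obfuscation is caught while newly invented slang slips through entirely.

```python
import re

BANNED_TERMS = {"examplebadword"}  # hypothetical blocklist entry
LEET_MAP = str.maketrans("013457@$", "oleastas")  # crude character substitutions

def is_flagged(message: str) -> bool:
    """Naive blocklist check: lowercase, undo common substitutions, strip non-letters."""
    normalized = re.sub(r"[^a-z]", "", message.lower().translate(LEET_MAP))
    return any(term in normalized for term in BANNED_TERMS)

print(is_flagged("3x4mpl3 b4d w0rd!"))  # True  – simple obfuscation is still caught
print(is_flagged("xmplbd"))             # False – invented slang evades the list entirely
```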

Apart from specific technical implementation issues, business considerations also affect content moderation in online gaming. A key problem is that gaming platforms demonstrate very different standards in content moderation and are not governed by a clear and consistent regulatory framework. For example, Nintendo is famous for its particularly strict content moderation policy due to its family-friendly brand, whereas other studios that produce mature-rated or adult-only games hold a relatively tolerant attitude towards deviant speech and conduct.

Future trends in regulating online gaming

Given the unique “interactional risk” of online games with social features, a major trend is to combine child protection law with legal requirements for online content moderation, since these vulnerable gamers have long been active participants in the industry.

Germany’s youth protection law, amended in April 2021, now integrates the in-game communication environment into the reformed age-rating standard for video games: titles with unrestricted chat functions will receive a higher age rating. Similarly, the UK Draft Online Safety Bill published in May 2022 gives special focus to online content accessed by children, stating that platforms hosting user-generated content have tailored duties to minimize the presence of harmful content, report abusive content against children, and assist law enforcement if needed.

In the European Union, another crucial change is to bring the online gaming industry under the general regulations for online platforms that provide hosting services. The recent European Regulation on preventing the dissemination of terrorist content online and the upcoming Digital Services Act (DSA) will also impact the online gaming industry, irrespective of gaming companies’ countries of establishment.

Indeed, according to the DSA, gaming companies will now be obliged to:

More importantly, in cases of detection of illegal content, gaming companies are now expected to assume more responsibilities, including:

For “very large online platforms”, there will be extra requirements for risk assessments, independent audits, transparency reporting, etc., which may affect major players in the market, such as Microsoft, Sony, Nintendo, and Steam, if the industry keeps expanding at the current rate. In response to the DSA, the European gaming industry is calling for more detailed and nuanced regulation to address the complex and diverse services in the ecosystem. However, one key trend is certain: online gaming platforms will no longer remain self-regulated without direct intervention from governments, and they will be held accountable for not investing enough effort in combating their users’ illegal speech and conduct.

Tremau Policy Research Team

Europe’s Digital Future: The Impact of Digital Regulations on Online Service Providers

Twenty years after the e-Commerce Directive, the EU has significantly increased the number of regulations impacting online service providers. From the Digital Single Market Strategy in 2015 to the Commission’s vision for a digital transformation by 2030, the EU’s objective has been to create an environment where digital networks and services can prosper, and where citizens, businesses, and civil society at large can benefit from them. As part of these strategies, the EU has passed multiple regulations and directives in the last five years that impact content on digital platforms, with many more pieces of legislation to be implemented in the coming years.

If you are an online service provider, the following regulations can directly or indirectly impact you:

Figure: Timeline of European regulations concerning the digital single market

E-Commerce Directive (Directive, 2000)
• Remove obstacles to cross-border online services in the EU market and enhance the competitiveness of European service providers.
• States that OSPs cannot be held liable for third-party illegal content.
Penalty: Enforcement, penalties, and sanctions vary from one member state to another.

Code of Conduct on Hate Speech (Voluntary Code, 2016)
• Prevent and counter the spread of illegal hate speech online.
Penalty: N/A

Code of Practice on Disinformation (Voluntary Code, 2018)
• Commitments ranging from transparency in political advertising and the closure of fake accounts to the demonetisation of purveyors of disinformation.
Penalty: N/A

General Data Protection Regulation (GDPR) (Regulation, 2018)
• Protect users’ personal data.
• Restrict non-consensual processing and movement of data.
• Facilitate business in the digital single market.
Penalty: 20 000 000 EUR or 4% of the firm’s worldwide annual revenue.

Directive on copyright and related rights in the Digital Single Market (Directive, 2019)
• Ensure fairer remuneration for creators and rightsholders, press publishers, and journalists, in particular when their works are used online.
• Obligations include obtaining authorisation from rightsholders for content uploaded on the platforms of online content-sharing providers.
Penalty: Enforcement, penalties, and sanctions vary from one member state to another.

Audiovisual Media Services Directive (AVMSD) (Directive, 2020)
• Provides EU-wide media content standards for all audio-visual media, including video-sharing platforms.
• Protection of minors against harmful content and reinforced protection against incitement to violence or hatred.
Penalty: Enforcement, penalties, and sanctions vary from one member state to another.

Regulation on promoting fairness and transparency for business users of online intermediation services (Regulation, 2020)
• Ensure that business users are granted appropriate transparency, fairness, and effective redress possibilities.
• Applies to online intermediation services and online search engines.
Penalty: Member states shall ensure that effective, dissuasive, and proportionate penalties are applied to infringements.

Terrorist Content Online Regulation (Regulation, 2022)
• Ensure that hosting service providers take down identified terrorist content within an hour of a removal order from the relevant authorities.
Penalty: 4% of the platform’s annual turnover.

Digital Services Act (Regulation, 2023**)
• Establish transparency and accountability frameworks for platforms, and encourage innovation, growth, and competition within the European market.
• Obligations include measures to counter illegal goods, services, or content online, and to trace sellers of illegal goods.
• Audit and transparency measures as well as complaint mechanisms to be implemented.
Penalty: 6% of the platform’s annual global turnover. Platforms that refuse to comply can be taken to court and given a temporary suspension.

Data Governance Act (Regulation, 2023*)
• Outlines rules on who can use and access data generated in the EU.
• Applies to businesses, consumers, and the public sector.
• Facilitates data sharing across sectors and member states.
Penalty: Administrative fines of 4% of total worldwide annual turnover or 20 000 000 EUR. Member states can also implement additional penalties.

Proposal for a Regulation on General Product Safety (Regulation, 2024*)
• Address product safety challenges.
• Enhance market surveillance of dangerous products in the EU.
• Increase protection of EU consumers.
Penalty: 4% of the economic operator’s or online marketplace’s annual turnover. Member States may choose to impose periodic penalty payments.

Proposal for a Regulation on AI (Regulation, 2024*)
• Ensure that AI practices and systems in the market are safe and respect existing law, values, and fundamental rights.
• Establishes a list of prohibited AI systems whose use is considered unacceptable.
• Applies to providers of AI systems, users, and other participants across the AI value chain.
Penalty: 30 000 000 EUR or 6% of total worldwide annual turnover for non-compliance; 10 000 000 EUR or 2% of total worldwide annual turnover for supplying incorrect, incomplete, or misleading information.

Proposal for a Regulation laying down rules to prevent and combat child sexual abuse (Regulation, 2025*)
• Combat child sexual abuse material (CSAM) online with obligations for online service providers to detect, report, and remove CSAM from their services.
• Obligations include mandatory risk assessment and risk mitigation measures, reduction of exposure to grooming, proactive content detection, effective removal, reporting obligations, data collection and transparency obligations, and a single point of contact.
Penalty: 6% of the annual income or global turnover of the provider.
*The marked dates refer to proposed dates of implementation.
** The DSA is applicable for very large online platforms from 2023 and all other online services from 2024.

Code of Conduct: Voluntary initiatives that establish self-regulatory standards to achieve an objective.

Directive: Requires EU countries to achieve certain objectives but gives the country flexibility in how they choose to do so. Countries must incorporate directives into national law.

Regulation: Legal acts that apply automatically and uniformly to all EU countries as soon as they are in force, without needing to be transposed into national law. They are binding in their entirety on all EU countries.

The acceleration of legislation passed over the last few years, as well as the multiple pieces still under development, is indicative of the rapidly changing expectations of different stakeholders in the digital environment. The pace of legislation shows no sign of slowing down and, within a dynamic environment like the internet, issues of trust & safety and compliance become ever more important and relevant.

More importantly, for businesses, this creates a new and increasingly complex environment to navigate. As new obligations emerge, Tremau is committed to helping online platforms steer through the European legal ecosystem by facilitating their understanding of how such laws impact their business and operational models, and providing them with the solutions to ease their compliance efforts.

To find out more about how these regulations will impact you, contact us at info@tremau.com

Regulation on Terrorist Content Online: What does it mean for you?

As of June 7th, 2022, the EU Regulation on preventing the dissemination of terrorist content online (TCO) will be applicable across Europe. This will have a significant operational impact on all hosting service providers (HSPs) offering their services in the EU, irrespective of their place of establishment. In short, the regulation aims to ensure that HSPs take action against terrorist content online and introduces the “golden hour” rule: HSPs will have one hour to respond to authorities’ removal orders targeting terrorist content on their platform.

Who does the Terrorist Content online regulation apply to?

To all hosting service providers (HSP) – meaning any provider that:

This includes social media platforms, video streaming services, multimedia file-sharing services, and other cloud services. HSPs that have users in one or more Member States or target their activities towards one or more Member States will also be impacted. 

New Obligations for HSPs

As an HSP under the scope of the TCO, below are some of your new obligations:

Single point of contact and legal representative
• Establish a single point of contact who will receive removal orders or referrals.
• Designate a person as your legal representative in the Union who can be held liable for non-compliance.

Remove / disable access
• Remove or disable access to the terrorist content within 1 hour of receiving a removal order.

Implement proactive measures
• Implement rapid content assessment measures, including automated tools, in case of a referral.
• Mitigate and manage the risk and level of exposure on your service.
• Implement human oversight and verification procedures for automated detection and removal tools.

Preservation of content and data
• Preserve removed or blocked content for 6 months, and longer if requested by an authority.
• Apply technical and organisational safeguards to preserved content.

Complaint mechanism
• Establish mechanisms that allow users to contest the removal/blocking of content they uploaded and to request that it be reinstated.
• Address every complaint without delay and reinstate the content if the removal was unjustified.

Information to users
• Inform users about the removal/blocking of their uploaded content.
• Upon request, give reasons for the action taken, unless determined otherwise by competent authorities.

Cooperation
• If you become aware of any terrorist content, promptly inform the relevant authorities.

Transparency
• Clearly communicate your policies and measures to prevent the dissemination of terrorist content in your terms and conditions.
• Publish annual transparency reports on all measures taken to comply, including the use of automated tools; reports should also note the amount of content removed/blocked as well as an overview of the complaint procedures in place.

Implications for HSPs

From 7th June onwards, HSPs will need to take a number of steps to be compliant with the TCO – the most immediate being setting up a single point of contact and communicating it to the Member States. 

Failure to comply with the above obligations can result in penalties up to 4% of the HSP’s global turnover.

Other operational implications include setting up measures to mitigate the risk of exposure to terrorist content on their services, implementing procedures to assess notified content, adopting proactive measures to protect their users and services, establishing complaint mechanisms, and finally, setting up procedures to routinely publish transparency reports on different aspects of their obligations.  
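
As a minimal sketch of how these time-bound obligations can be tracked internally, the snippet below computes the one-hour removal deadline and the six-month preservation window for each removal order. The exact duration used for six months and the names in this sketch are illustrative assumptions rather than anything prescribed by the TCO.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

REMOVAL_DEADLINE = timedelta(hours=1)      # the "golden hour" after receiving a removal order
PRESERVATION_PERIOD = timedelta(days=183)  # roughly six months, unless authorities request longer

@dataclass
class RemovalOrder:
    content_id: str
    received_at: datetime

def removal_deadline(order: RemovalOrder) -> datetime:
    """Latest time by which the content must be removed or access to it disabled."""
    return order.received_at + REMOVAL_DEADLINE

def preservation_expiry(removed_at: datetime, extended_until: Optional[datetime] = None) -> datetime:
    """Removed content and related data are preserved for six months, or longer upon request."""
    return extended_until or (removed_at + PRESERVATION_PERIOD)

order = RemovalOrder("video-123", datetime.now(timezone.utc))
print("remove by:", removal_deadline(order))
```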

Tremau’s solution provides a single trust & safety content moderation platform that prioritises compliance as a service and integrates workflow automation and other AI tools. The platform ensures that HSPs can respect all TCO requirements while improving their key trust & safety performance metrics, protecting their brands, increasing handling capacity, as well as reducing their administrative and reporting burden. 

Tremau Policy Research Team

Image from: https://www.consilium.europa.eu/en/infographics/terrorist-content-online/

Are Our E-Commerce Platforms Safe and Sustainable?

Since the birth of the commercial internet in the mid-90s, e-commerce platforms have radically transformed the ways we transact and consume, with significant impacts on the economy and on all affected industries. Between 2018 and 2020, major economies saw a 41% rise in online retail sales. Online retail is currently forecast to grow to $7.5 trillion, which would account for around 24.5% of total retail sales. However, much as in other corners of the online world, not all of this impact has been positive. While online trade volumes have massively increased, so has the trade of illegal products online. This creates safety and sustainability risks both for the overall economy and for the online platforms themselves.

Figure: E-commerce as % of total retail sales worldwide – Source: Statista

Illegal Products “Flourish” on e-Commerce Platforms

According to a 2019 study, trade in counterfeit and illegal goods stood at 3.3% of multi-trillion-dollar global trade and represented 6.8% of imports from non-EU countries into the EU – with a significant part of this activity happening online. Covid-19 lockdowns further boosted the number of “risky sellers” on e-commerce markets offering illicit goods that may even threaten public health, such as substandard medicine, unreliable test kits, and other lower-quality Covid-related PPE goods. These figures underline the gravity of the risks e-commerce platforms may create for society and the economy, but also for their own business.

As a recent OECD study demonstrates, online sales represent 60% of global seizures of dangerous products destined for the EU. Consider, for example, public health risks. One can buy plenty of ineffective prescription pills, unsafe materials, or substandard ingredients in lipsticks and baby formula on the internet – all of which can jeopardise consumers’ health and safety.

The most commonly traded category of dangerous fakes includes perfumery and cosmetics, followed by clothing, toys, automotive spare parts, and pharmaceuticals. Counterfeit electrical goods can also be dangerous, as they are not put through the same safety checks as legitimate items. Aside from harm to their health, consumers also risk compromising their personal data on websites selling harmful goods and may be more susceptible to scams.

Counterfeits Cause Significant Value Destruction for the Economy and Online Platforms

Illegal and “unsafe e-commerce” can create significant value destruction for the economy and online platforms. It creates a “market for lemons”, which is known to significantly hurt economic activity as it erodes the trust of people transacting on these online platforms. For marketplaces and e-commerce platforms, product recalls, potential liability claims, consumer confusion, and brand dilution present substantial costs and concerns. Almost 75% of brands have lost money due to counterfeit goods being sold online, with 42% losing up to 10% of their sales. Buyers may leave when it becomes harder to verify the credibility of vendors and the authenticity of products. The high prevalence of counterfeits also makes platforms less attractive for legitimate sellers. The end result is a loss of trust and, eventually, a loss of sales.

Indeed, consumers, vendors, and online marketplaces have varying stakes in curbing the growth of counterfeit products. As noted, online marketplaces are particularly susceptible to adverse selection due to information asymmetry between buyers and sellers, an important issue which is not currently well mitigated. Consumers are often lured to fakes due to low prices, which perpetuates the growth of counterfeits. Moreover, its dangers have created a problem of trust and its prevalence has wider negative socioeconomic impacts such as the displacement of jobs and of legitimate economic activity. Ensuring the safety and legitimacy of e-commerce platforms is therefore not only a social and moral imperative but also a significant economic one for both the global economy and the online platforms themselves.

Online Platforms under Increasing Regulatory Scrutiny

This is why, almost a quarter of a century after the birth of the commercial internet, regulators are working on new laws with significant implications for e-commerce platforms and for how we trade and consume. These include regulations such as the upcoming EU Digital Services Act (see below) and the EU General Product Safety Regulation, the Canada Consumer Product Safety Act (2011), the UK Consumer Rights Act (2015), and the voluntary Internet Charter to fight counterfeiting in France.

What are the DSA requirements for e-commerce platforms? 

The EU Digital Services Act is a comprehensive legislation that aims to create a safe, trustworthy, and transparent environment for all the actors present on digital platforms. It builds on the e-Commerce Directive (2000), laying out more detailed obligations proportional to the size of different digital services, and defines online marketplaces as platforms that allow consumers to conclude distance contracts with traders. To comply with the DSA:

– Do you collect and store essential information about your traders, and can you provide it upon regulatory request?
– Is all your information on traders verified and updated? Do you have KYBC capabilities?
– Do the algorithms driving advertisements on your platform risk subverting your customers’ choices in unfair or other illegal ways?
– Do you let your consumers know of illegal or counterfeit products they may have purchased as well as give them information about the trader they purchased from?
– Do you have a notice mechanism for illegal or counterfeit products on your platform? 

Upcoming Regulatory Requirements for Online Platforms

These regulations, along with the rising demands on online consumers and civil society, will have profound implications for online marketplaces, aiming at “fixing e-commerce” to ensure that the internet remains true to its original mission: to create enormous value for society and businesses without raising new significant risks. Online platforms will need to put in place new processes and systems to comply with these regulations, including complex case management systems designed in line with the new regulatory requirements, connectivity with potentially hundreds of newly formed Trusted Flagger organizations across multiple countries, KYBC/KYC systems, transparency reporting capabilities, and many others.

Once the DSA comes into force, online marketplaces will need to comply with obligations intended to protect consumers, traders, and platforms in this new environment. With respect to platform design, providers will no longer be able to rely on “dark patterns” – design techniques that nudge consumers toward certain decisions, such as features that make it difficult to sign out from the marketplace or default settings that are difficult to change. In short, any element that subverts or impairs autonomy, decision-making, and choice cannot be included in the platform interface.

Another obligation focuses on the traceability of traders on the platforms. Platforms will need to implement KYBC systems and processes. Traders that wish to sell their products on the platform must provide essential information – such as name, where they are registered, copy of ID, etc. – to the platform, which will then be stored for a reasonable period of time in case of liability risks. This information will need to be verified by the platform and if the trader fails to provide updated or more accurate information upon request, the platform can suspend the provision of the trader’s services. 
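
As a minimal sketch of this traceability obligation, the record below captures the kind of essential trader details a marketplace would collect and verify before allowing sales, and what happens when verification fails. The field names and the `verify_or_suspend` helper are hypothetical; the DSA describes the information to collect but not a specific data model.

```python
from dataclasses import dataclass

@dataclass
class TraderRecord:
    """Essential trader details collected before listings go live (KYBC)."""
    trader_id: str
    legal_name: str
    registration_country: str
    registration_number: str
    id_document_ref: str       # reference to the stored copy of an identification document
    verified: bool = False

def verify_or_suspend(record: TraderRecord, checks_passed: bool) -> str:
    """Mark the trader as verified, or suspend listings until updated information is provided."""
    if checks_passed:
        record.verified = True
        return "active"
    return "suspended_pending_update"

trader = TraderRecord("t-001", "Example Retail Ltd", "FR", "RCS 123 456 789", "doc-889")
print(verify_or_suspend(trader, checks_passed=False))  # suspended_pending_update
```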

Furthermore, when the platform becomes aware of any illegal products or services, it needs to inform the consumers who purchased illegal products or services of their illegality as well as the identity of the traders and any means of redress. If the consumers cannot be contacted, the information needs to be made publicly available and easily accessible. 

By putting such obligations of transparency, traceability, and verifiability in place, the regulation prioritizes consumers and aims to protect different actors and stakeholders in the e-commerce ecosystem. Platforms that comply can expect a positive impact on their brand value and to protect, retain, and grow their user base. To adapt to this new e-commerce world and comply with varying regulations, platforms and online service providers will need to put retroactive processes in place and incorporate trust by design. 

Towards E-Commerce Sustainability

Today, consumers expect safe and reliable e-commerce platforms, businesses seek to reduce costs that ‘markets of lemons’ create, and regulators prepare new laws to counter the plethora of harms on the digital space. As we move forward, raising the level of trust in how online marketplaces operate is fundamental and will increasingly be translated into business value. To this extent, future regulations, along with voluntary actions, are an important step towards securing consumer trust and mitigating business and socioeconomic risks. Building a ‘next generation’ online commercial space will require investments and changes by all actors involved. It is only by making these necessary changes that we will be able to ensure the sustainable growth of e-commerce and the survival of a healthy and competitive online industry.

For more information about the new regulatory requirements, you can contact us at info@tremau.com

Tremau Policy Research Team

New EU Proposal to Combat Child Sexual Abuse Online

On 11 May 2022, the EU Commission proposed a new set of rules to combat child sexual abuse material (CSAM) online, laying out new obligations for providers to detect, report, and remove CSAM from their services. This follows efforts such as the 2020 EU Strategy for a More Effective Fight Against Child Sexual Abuse and falls under the EU’s recent strategy on the Rights of the Child.

Concretely, the proposed regulation builds on the upcoming Digital Services Act and aims to replace the current interim rules on the processing of personal and other data for the purpose of combating online child sexual abuse.

New obligations for Online Service Providers

The obligations set forth in the new proposal are directed at all online service providers (OSPs) operating in EU Member States, including providers of hosting, interpersonal communication services, and app stores. Currently, the new obligations discussed include:

Mandatory risk assessment and risk mitigation measures
• Online service providers (OSPs) will be required to assess the risks of their services being misused for grooming (solicitation of children) or for the dissemination of CSAM.
• Appropriate risk mitigation measures will subsequently need to be taken by the OSPs.
• OSPs will be required to report the results of risk assessments to the competent national authorities in their relevant Member State.

Reduced exposure to grooming
• App stores will need to assess whether any apps on their platform are at risk of being used for solicitation.
• Reasonable measures should subsequently be taken to identify child users and prevent them from accessing such apps.

Proactive content detection
• Proactive content detection should be carried out by OSPs using indicators of child sexual abuse verified and provided by the EU Centre (an illustrative sketch of such matching follows this list).
• Detection technologies put in place by OSPs should only be used to detect child sexual abuse.
• OSPs will need to prove that the technology used for proactive content detection is proportionate.

Effective removal
• National authorities can issue removal orders in cases where CSAM is not swiftly taken down, and hosting providers will be required to disable access to a server hosting CSAM that cannot be taken down.

Reporting obligations
• OSPs have to report any detected CSAM to the relevant authorities and the newly created EU Centre.

Data collection and transparency obligations
• OSPs will be required to collect aggregated data relating to their processes and activities under this regulation and make the relevant information available to the EU Centre.
• An annual transparency report should be published and made accessible to the general public.

Single point of contact
• OSPs should establish a single point of contact for direct communication with Coordinating Authorities, other competent authorities of the Member States, the Commission, and the EU Centre.
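
As a minimal illustration of hash-based proactive detection against a centrally provided indicator list, the sketch below compares an exact cryptographic hash of an uploaded file with a set of known indicators. The `EU_CENTRE_HASHES` set is hypothetical, and real deployments typically rely on perceptual hashing (which tolerates re-encoding and cropping) rather than exact SHA-256 matching.

```python
import hashlib

# Hypothetical indicator set; in this sketch it contains the SHA-256 of the bytes b"test".
EU_CENTRE_HASHES = {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}

def matches_known_indicator(file_bytes: bytes) -> bool:
    """Return True if the file's hash matches a verified indicator."""
    return hashlib.sha256(file_bytes).hexdigest() in EU_CENTRE_HASHES

print(matches_known_indicator(b"test"))         # True  – matches the illustrative indicator
print(matches_known_indicator(b"other bytes"))  # False – no match, nothing is reported
```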

Enforcement Measures and Heavy Penalties

The regulation presented by the Commission also proposes the creation of an independent EU Centre on Child Sexual Abuse that will act as a “hub of expertise, provide reliable information, identify and analyse any erroneous reports, forward relevant reports to law enforcement, and provide victim support”. The EU Centre will work alongside online service providers, national law enforcement agencies and Europol, Member States, and victims.

In addition, and in line with the Digital Services Act, Member States will be required to designate competent authorities, called Coordinating Authorities, who will be responsible for the application and enforcement of the regulation. They will have the power to impose fines or request a judicial authority in the Member State to do so, the power to impose a periodic penalty payment to ensure that an infringement of the Regulation is curbed, and the power to adopt interim measures to avoid the risk of serious harm, to name a few.

Such penalties for infringements can go up to 6% of the annual income or global turnover of the preceding business year of the provider.

Next steps in the legislative process

The proposed regulation has a long way to go before being adopted: the initial proposal of the Commission will need to be agreed on by both the European Parliament and EU Council (separately and collectively). This process is likely to take up to two years.  

The proposal comes as part of a European strategy for a Better Internet for Kids (BIK+) that rests on the pillars of creating a safe digital environment, empowering children in the digital world, and improving children’s active participation.

For further information on this regulation and how it can affect your organization you can contact us at info@tremau.com

Tremau Policy Team
