In short: Yes. Having a comments section on your platform qualifies your services, at a minimum, as a hosting service. More precisely, a hosting service within the meaning of the DSA is a service that stores user-generated content (UGC). Perhaps because the definition is so simple, it encompasses a whole host of services that differ greatly in type, size and business model. The category includes web hosting companies like AWS or GoDaddy, file storage and sharing services like Dropbox, marketplaces like Amazon and Alibaba, as well as social media services like Facebook. Because there are no exemptions based on the size or type of service, hosting services also include some that may come as a surprise, such as the comments sections of online newspapers and blogs, as well as user reviews on e-commerce websites.
Short answer: It depends.
Read the full version of the blog on Utopia.
Given the increasing focus on trust and safety and the responsibilities of actors across the Internet ecosystem, regulation has gradually shifted its focus to transparency requirements. What processes must be in place to deal with illegal content while protecting fundamental rights and due process? The Digital Services Act (“DSA”) is quite clear: if a company’s services are to be considered safe and trusted, transparency is non-negotiable.
If there is one place on the Internet where transparency could provide some much-needed insight into content moderation, it is the Internet’s infrastructure. This infrastructure is a space of various actors providing the everyday services that allow users to have a seamless, reliable, and secure Internet experience; yet it generally attracts little attention because it is obscure and predominantly technical. Actors at this level provide conduit, caching, and hosting services and include companies such as Cloudflare, Amazon Web Services, and the Google Play Store, to name a few. Their operations are crucial, yet they often seem distant from public discourse; they are frequently considered inaccessible and, occasionally, unaccountable to everyday users.
The question, therefore, is whether the DSA could help shed some light on the practices of these otherwise invisible actors. Does the DSA manage to create a consistent and predictable environment for infrastructure providers, one that could help alleviate some of the opacity of their content moderation practices?
Read the full version of this article on the Tech Policy Press.
By: Konstantinos Komaitis, Louis-Victor de Franssu, Agne Kaarlep
It has been around two weeks since Elon Musk, the world’s richest man, acquired Twitter, and fears about what this means for free speech on the microblogging platform have already begun to proliferate. With Musk firing some of Twitter’s key personnel, including legal chief Vijaya Gadde, and terminating contracts with outsourced content moderators, many users are looking for an alternative.
A substantial number are migrating to the ‘fediverse,’ and specifically to Mastodon, a similar microblogging platform that has been called “Twitter, with the underlying architecture of email”. Mastodon’s decentralization raises substantial questions about how existing regulatory regimes, such as Europe’s Digital Services Act (DSA), will apply.
Read the full version of this article on the Tech Policy Press.
By: Konstantinos Komaitis, Louis-Victor de Franssu
With one in three internet users under the age of 18, online child sexual abuse has become a global public safety issue, producing a generation of victims. The WeProtect Global Alliance estimates that a staggering 54% of those who regularly used the internet as a child (now aged 18-20) were the victims of at least one online sexual harm. The stigma that still surrounds child sexual exploitation and abuse makes it likely that what we know is only the tip of the iceberg, and that our statistics underestimate the prevalence of the issue.
Though highly alarming, sexual exploitation and abuse are just one form of illegal or harmful content or conduct impacting young people online. Cyberbullying, impersonation, trolling, harassment, exposure to hate speech, encouraging self-harm, identity theft and phishing aimed at children are also on the rise. Consequences range from cautionary tales to harrowing tragedies. For example, Italy ordered TikTok to block anyone whose age could not be confirmed, following the death of a 10-year-old who attempted a dangerous challenge. We are also just learning that young people, regardless of gender, are susceptible to eating disorder trends that can be amplified by social media.
Read the full version of this article on the World Economic Forum.
By: Jacqueline Beauchere, Theos Evgeniou, Louis-Victor de Franssu
On 8 February 2022, Safer Internet Day, e-Enfance – a French NGO fighting bullying and online harassment of children – launched a nationwide app, Application 3018, to facilitate the reporting of cyber harassment. The application is combined with a dedicated online trust & safety platform that enables faster victim support and more efficient removal of harmful content by online platforms.
The Internet Watch Foundation reported that 2021 was the worst year on record for child sexual abuse online, with a surge in cases of online grooming during lockdown. A recent study conducted by the WeProtect Global Alliance revealed that almost 70% of respondents aged 18-20 had experienced online sexual harm in their childhood.
These statistics paint a disturbing picture and underscore the importance of interventions to protect children online.
The launch of Application 3018 comes in the wake of French President E. Macron’s call to governments and other relevant providers and organisations to “stand up for children’s rights in the digital environment”.
The application was created to specifically allow children to report instances of cyberbullying, ranging from inappropriate text messaging to sexual abuse material – all through an easy-to-use interface.
Last year, e-Enfance noted a 60% rise in cyber-harassment cases and reported receiving approximately 18,000 calls to its hotline from victims. Victims filed complaints in only 34% of cases.
Within the first week of the new system’s introduction, e-Enfance saw a significant increase in the number of cases handled (+30%) while reducing the time needed to assess cases and notify the relevant online platform. Given that 62% of children exposed to sexually explicit content receive it on their phones, the app provides a better reporting interface for them than traditional web forms.
As we continue monitoring the performance of this tool, we expect a further increase in notices sent through the app and an improvement in the speed of treatment, directly increasing the amount of content reported to platforms.
As the digital space increasingly permeates every aspect of our lives, from socialisation to e-commerce to education, we are faced with an abundance of opportunities and risks. Both the EU Code of Conduct and the Digital Services Act will have a significant impact on moulding this space to better protect citizens’ interests, especially those of children.
To protect children online, responsibility and compliance will be needed from every actor and stakeholder involved. Upcoming regulations impose new reporting obligations on trusted flaggers, requiring them to update their existing report management processes to enable transparency reporting and the production of audit trails. The platform used by e-Enfance addresses all of these requirements while leveraging innovative and highly secure digital and AI solutions to achieve a compliant and safe environment for all.
We invite everyone, civil society, online platforms, regulators and governments, to join us in building a digital world that is safe and beneficial for all, especially children.
By: Louis-Victor de Franssu, Toshali Sengupta
In a reversal of its long-held practice of “privacy first”, Apple announced in August 2021 that it would launch a new feature to scan images and videos on its devices in order to detect stored child sexual abuse material (CSAM). The policy shift epitomises the major changes happening today, both in regulation and in business, aimed at ensuring a responsible use of technology and a safe digital space. Yet Apple’s new policy raised so many concerns from security and privacy experts that the company has delayed its plan.
The concept of a safe digital space is not limited to the proliferation of CSAM. Intermediary service providers, i.e. any firm that connects people, such as social media, marketplaces or online platforms for disseminating user-generated content, face a growing number of abuses of their services. These include the spread of hate speech, terrorist content, illegal goods and services, spam and disinformation.
In fact, every year intermediary service providers around the world detect and remove billions of pieces of content from their platforms because the content is either illegal or contrary to their terms of service.
This affects small and giant platforms alike. Thousands of small online platforms have become home to massive amounts of illegal content posted by their users every month. Facebook identified more than 500 million pieces of such content in 2020 (1.3 billion including spam) and spends hundreds of millions of dollars on content moderation. Some of this content is so extreme and violent that the people moderating it reportedly often suffer mental health issues.
Of course, the issue of illegal or harmful content did not appear with the rise of digital services. But the scale and speed at which such content can spread, amplified by increasingly sophisticated malicious actors, are worrying.
This has raised alarms for governments around the world, which are designing new regulatory frameworks to mitigate some of these risks, with important implications not only for the future of society but also for the businesses they intend to regulate. However, achieving a safe digital space has proved, and will continue to prove, significantly challenging for regulators and companies alike.
Democratic governments attempting to regulate the online space must grapple with contradictory objectives. They must balance keeping the internet safe, by requiring platforms to prevent the spread of illegal content, against ensuring that fundamental human rights, including freedom of speech, are protected online.
With more than 95 million photos uploaded to Instagram daily, to name one platform giant, the sheer volume and potential virality of content posted online make judicial review prior to content removal nigh on impossible. Governments must therefore rely on setting out obligations for the private sector to moderate illegal content based on specific regulatory principles. But the more stringent the rules, the higher the risk of over-removal of content; the more lenient the regulation, the higher the risk of illegal or harmful content spreading.
A related challenge for legislators is defining what effectively constitutes illegal content in a way that is broad enough to cover the targeted harms yet specific enough to avoid the risk of censorship creep. Impractically broad definitions present serious risks for freedom of expression. Many worry that this difficulty could lead to political censorship in less democratic countries that attempt to define such rules without proper safeguards.
Moreover, such regulatory definitions could leave substantial grey zones, requiring companies to decide whether to remove content solely at their own discretion. This ambiguity, combined with the pressure on platforms to act as soon as such content is detected, increases the risk of over-censorship, with important repercussions for freedom of expression online.
Another difficulty faced by regulators is how to implement effective obligations while preserving competition within markets. This means striking the right balance between imposing minimum requirements on all relevant services and avoiding the creation of barriers to innovation or market entry.
In an attempt to find fit-for-purpose solutions to these dilemmas, democratic governments and some of the largest digital services initially launched a series of self- and co-regulatory initiatives, such as the Facebook white paper on regulation and the EU Code of Conduct. Yet the outcomes were not always deemed sufficient by regulators, which have instead started to develop new frameworks obliging online platforms to address detected illegal content or face severe penalties.
In general, these new regulatory approaches can be divided into two broad categories: content-specific and systemic. The first consists of designing legislation that targets a single type of online harm, such as copyright infringement, terrorist content or CSAM, and focuses on the effective and timely removal of that content. Examples of such regulations include the European Union’s Terrorist Content Online Regulation, the French law on disinformation, the German Network Enforcement Act (NetzDG) and the Directive on Copyright in the Digital Single Market.
In contrast, the systemic approach aims at providing a cross-harm legal framework whereby online companies must demonstrate that their policies, processes and systems are designed and implemented to counter the spread of illegal content on their platforms and mitigate potential abuses of their services while protecting the rights of their users. This is the direction proposed in the recent Online Safety Bill in the United Kingdom and the Digital Services Act (DSA) in the European Union.
In the case of the DSA, for example, first presented by the Commission in December 2020, the legislators neither modify the existing liability regime nor define illegal content online. Instead, the Commission sets out new harmonised responsibilities and due diligence obligations for intermediary service providers: they must have processes and procedures in place to remove or disable access to content on their platforms once they become aware that it is illegal. These regulations have implications for all intermediary service providers that go beyond potential large financial penalties.
Firms will need to move from a culture of “move fast and break things” to a more reasonable “move fast and be responsible” as they comply with complex cross-jurisdictional demands while maintaining customers’ trust. A shift towards a risk-based approach – already the path some regulators are taking, as the EU proposal on regulating AI indicates – requires organisational changes and the development of new risk management frameworks. Affected businesses need to understand the operational implications of the new regulatory obligations, assess their ability to comply and implement appropriate risk mitigation measures.
Lessons from other sectors, such as finance, can prove useful. Much like in those sectors, online platforms will need to develop new policies and procedures, and then implement technical solutions. They will also need to create new roles and responsibilities, ultimately leading to organisational and cultural changes within their businesses.
First, companies, regardless of their size, will need to put processes in place to address illegal content that they have been made aware of from a number of different sources, such as national competent authorities, the platform’s users or its internal moderation systems. They will also need to develop content moderation management processes and tools to ensure transparency, fairness, safety and compliance across different jurisdictions (a minimal sketch of such a notice-handling workflow follows the third point below). These will unavoidably add cost and operational complexity for all online platforms.
For example, the European Commission estimates that the annual cost of implementing and operating such tools, including content moderation management or transparency reporting workflows, can reach tens of millions for the larger players.
Second, new transparency requirements for online advertising call for online platforms to develop dedicated processes and tools to provide their users with information about the advertiser and the targeted audience. Additionally, providers of online marketplaces will be required to enact “know your business customer” policies and collect identification information from the business users operating on their platform. This obligation is largely inspired by similar requirements in the financial industry, adopted to limit the risks of money laundering.
And third, very large online platforms (VLOPs) will be subject to further requirements, including the obligation to conduct annual risk assessments of significant systemic risks stemming from the use of their services. These assessments will need to cover risks related, for example, to the dissemination of illegal content through their services and the intentional manipulation of their platforms. While the EU Commission does not, at this stage, provide guidance on risk assessment methodology, the DSA contains an initial list of potential risk-mitigation measures.
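As a purely illustrative sketch of the kind of record-keeping these obligations imply, the Python snippet below models a hypothetical notice with an audit trail that could feed transparency reports. The field names, statuses and workflow steps are assumptions for illustration only, not terms or requirements defined by the DSA.

```python
# A purely illustrative, hypothetical sketch of a notice-and-action record
# with an audit trail that could feed transparency reports. Field names,
# statuses and workflow steps are assumptions, not terms defined by the DSA.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class AuditEvent:
    timestamp: datetime
    actor: str        # e.g. "moderator:42" or "system:classifier"
    action: str       # e.g. "received", "assessed", "removed", "appealed"
    detail: str = ""


@dataclass
class Notice:
    notice_id: str
    source: str       # e.g. "user_report", "trusted_flagger", "authority_order"
    content_url: str
    alleged_violation: str
    status: str = "received"
    audit_trail: List[AuditEvent] = field(default_factory=list)

    def log(self, actor: str, action: str, detail: str = "") -> None:
        """Record an audit event and update the notice's current status."""
        self.audit_trail.append(
            AuditEvent(datetime.now(timezone.utc), actor, action, detail)
        )
        self.status = action


# Example: a trusted-flagger notice moving through a simple workflow.
notice = Notice("N-001", "trusted_flagger",
                "https://example.com/post/123", "hate_speech")
notice.log("moderator:42", "assessed", "violates terms of service")
notice.log("moderator:42", "removed", "content disabled, user notified")
print(notice.status, len(notice.audit_trail))  # -> removed 2
```

In practice, a production system would add immutability guarantees, statements of reasons to users, and jurisdiction-specific reporting, but logging every action taken on each notice is the core mechanism that makes transparency reporting and audits possible.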
The development of an effective risk management framework will also require setting up a well-balanced enterprise organisation and risk culture, aligning compliance objectives with regulatory obligations, business and growth models, and reputation risk management. In fact, through the DSA, the European Commission will require that an organisation’s chief compliance officer have sufficient financial, technological and human resources, as well as an adequate level of seniority, to carry out the expected tasks. While these obligations target only VLOPs, online platforms that aim to scale and expand across multiple jurisdictions within the EU will benefit from adopting such organisational structures early.
Yet organisational changes will not be sufficient by themselves. As they grow, online platforms will need to move away from a Facebook culture to one of compliance where the firm’s systemic risks are understood and where employees are empowered to do the right thing.
Almost two decades after the first social media platforms arrived on the internet, revolutionising the ways human beings interact, communicate and do business, we have come to a bit of an impasse. Talk of regulating these businesses has intensified globally, especially given the potential impact social media can have on our political and socioeconomic systems. These platforms can become home to diverse communities, but also targets of illegal content and coordinated attacks. The regulations under development across multiple jurisdictions will not change this, but they will force the digital industry to adapt to a new paradigm and to find innovative solutions to tackle harmful and illegal online content.
This is an adaptation of an article published in WEF Agenda.
François Candelon, Managing Director, the Boston Consulting Group; Louis-Victor de Franssu, CEO, Tremau; and Theodoros Evgeniou, INSEAD Professor of Decision Sciences and Technology Management | October 25, 2021
François Candelon is a Managing Director and Senior Partner at the Boston Consulting Group. He is also the Global Director of the BCG Henderson Institute.
Louis-Victor de Franssu (INSEAD MBA ‘18D) is a co-founder and CEO of Tremau.
Theodoros Evgeniou is a Professor of Decision Sciences and Technology Management at INSEAD.
In recent years, a number of international organisations, regulators, governments, academics and businesses have worked on developing principles for Artificial Intelligence (AI).
Alongside the development of these principles, there is an ongoing discussion on how to regulate AI so as to best align risk management with optimising the potential value creation of these technologies. Risk management of AI systems will likely become a regulatory and societal expectation for all sectors, and for both business and government.
However, the emphasis on how to implement the proposed AI principles and upcoming regulations in practice is more recent, and appropriate tools to achieve this still need to be identified and developed. For example, implementing so-called Responsible AI requires the development of new processes, frameworks and tools, among other things. We review the current state of the field and identify possible gaps.
Boza, Pal and Evgeniou, Theodoros, Implementing AI Principles: Frameworks, Processes, and Tools (February 10, 2021). INSEAD Working Paper No. 2021/04/DSC/TOM. Available at SSRN.
By: Pal Boza and Theodoros Evgeniou
Artificial intelligence and machine learning (AI/ML) algorithms are increasingly developed in health care for the diagnosis and treatment of a variety of medical conditions. However, despite the technical prowess of such systems, their adoption has been challenging, and whether and how much they will actually improve health care remains to be seen. A central reason for this is that the effectiveness of AI/ML-based medical devices depends largely on the behavioral characteristics of their users, who, for example, are often vulnerable to well-documented biases or algorithmic aversion.
Many stakeholders increasingly identify the so-called black-box nature of predictive algorithms as the core source of users’ skepticism, lack of trust, and slow uptake. As a result, lawmakers have been moving in the direction of requiring the availability of explanations for black-box algorithmic decisions. Indeed, a near-consensus is emerging in favor of explainable AI/ML among academics, governments, and civil society groups. Many are drawn to this approach to harness the accuracy benefits of noninterpretable AI/ML such as deep learning or neural nets while also supporting transparency, trust, and adoption. We argue that this consensus, at least as applied to health care, both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.
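To make the contrast concrete, below is a minimal sketch, not taken from the article, comparing an interpretable model, whose coefficients serve directly as the explanation, with a post-hoc explanation (permutation importance) applied to a black-box model; the data is synthetic and all names are purely illustrative.

```python
# A minimal, illustrative sketch (not from the article): an interpretable model
# whose coefficients are themselves the explanation, versus a post-hoc
# explanation (permutation importance) of a black-box model. Synthetic data;
# all names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: each coefficient states how a feature shifts the prediction.
interpretable = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Logistic regression coefficients:", interpretable.coef_[0])

# Black-box model with a post-hoc explanation: permutation importance estimates
# which features matter by shuffling them, without revealing the model's logic.
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)
print("Permutation importances:", result.importances_mean)
```

Post-hoc techniques of this kind estimate which inputs matter to the black-box model's predictions; they do not expose the model's internal reasoning.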
By: Boris Babic, Sara Gerke, Theodoros Evgeniou and I. Glenn Cohen
For years public concern about technological risk has focused on the misuse of personal data. But as firms embed more and more artificial intelligence in products and processes, attention is shifting to the potential for bad or biased decisions by algorithms—particularly the complex, evolving kind that diagnose cancers, drive cars, or approve loans. Inevitably, many governments will feel regulation is essential to protect consumers from that risk.
This article explains the moves regulators are most likely to make and the three main challenges businesses need to consider as they adopt and integrate AI. The first is ensuring fairness. That requires evaluating the impact of AI outcomes on people’s lives, whether decisions are mechanical or subjective, and how equitably the AI operates across varying markets. The second is transparency. Regulators are very likely to require firms to explain how the software makes decisions, but that often isn’t easy to unwind. The third is figuring out how to manage algorithms that learn and adapt; while they may be more accurate, they also can evolve in a dangerous or discriminatory way.
Though AI offers businesses great value, it also increases their strategic risk. Companies need to take an active role in writing the rulebook for algorithms.
The full version of this article appeared in the September–October 2021 issue of Harvard Business Review.
By: François Candelon, Rodolphe Charme di Carlo, Midas De Bondt, and Theodoros Evgeniou
Almost two decades after the rise of online platforms, governments across the world, including in the European Union, the United Kingdom, Canada and Australia, have started designing new frameworks to regulate these services. Debate and controversy have been rife, including the recent Congressional hearing of Facebook whistle-blower Frances Haugen and Apple’s reversal of its plan to scan images and videos on its devices to detect child sexual abuse material (CSAM). Increased regulation will have important implications both for these businesses and for wider society.
Democratic governments face numerous challenges in attempting to regulate the online space. They grapple with sometimes contradictory objectives, as they attempt to find a balance between keeping the internet safe, while also protecting fundamental rights, including freedom of speech.
Read the full version of this article on the World Economic Forum.
By: François Candelon, Theos Evgeniou, Louis-Victor de Franssu