When a terrorist attacked worshippers in two mosques in New Zealand earlier this year, he live-streamed the attack on Facebook. Some 200 people watched in real time as a white supremacist on a killing spree murdered dozens of Muslims with semi-automatic rifles.
Events like this increase concerns about the internet being used as a platform to spread hate and extremism. Together with a range of other content such as “fake news” style misinformation, child abuse materials, and traditional spam, this may lead to a “junkification” of the web. Over time, this could seriously affect the internet’s usefulness for everyone. It is clear that action is needed, but how?
Many relevant questions are highly controversial. For example, how do we decide what content must be removed? Who should make that decision? And if content is to be removed, should it be removed locally, regionally, or globally?
Developments in the US Senate and Beyond
Last week, a U.S. Senate Committee held a hearing on mass violence, extremism, and digital responsibility. Representatives from Facebook, Google, and Twitter were questioned about how they address such content. The problem, however, is that this is an international, cross-border issue. Domestic, uncoordinated measures that are implemented in reaction to incidents are unlikely to be effective.
The fact that no single state or region can effectively regulate the entire internet is both comforting and distressing. It is comforting because there are many states I would not like to see in control of what content is accessible. And it is distressing because it means that fighting clearly undesirable content becomes very difficult, given the lack of international coordination and the limited international consensus.
Put simply, the problem is that while we expect our own state’s laws to apply effectively online, we do not want to be subject to all other states’ laws. Europeans wanting their defamation laws to apply globally, and Americans wanting their copyright regimes to provide global protection, may, for good reasons, be reluctant to see Chinese, Russian, and Iranian restrictions on free speech apply globally.
In a decision handed down today, October 3, the Court of Justice of the European Union emphasized the need for courts to act “within the framework of the relevant international law” when ordering content blocking or removal with worldwide effect. The problem, however, is that the framework of the relevant international law is like the combination of a Swiss and a blue cheese: it is full of holes, and what is there stinks.
Given the diversity of values online, we must carefully avoid a race to the bottom in which only content that is lawful everywhere in the world is allowed online. Consequently, the scope of blocking and removal decisions ought to be as geographically limited as possible, except in a small number of cases, such as when the content in question is obviously unlawful globally.
Diverse World, One Internet
The world consists of nearly 200 countries, some industrialized and some developing. All these states have their own history, economy, and culture. They have different social structures, political systems, and laws.
The people who populate these countries are of different ethnicities and speak various languages. They hold different values, religious beliefs, and political opinions. Indeed, even where they hold the same values, they frequently take different views on how those shared values should be balanced in specific cases where they clash with one another. This staggering diversity stands in contrast to the fact that we all – so far – essentially share one internet.
Given this background, there are few types of content that everyone will agree should be removed. This should not prevent us from working towards broad consensus. Initiatives such as the Christchurch Call and the important work of the Paris-based Internet & Jurisdiction Policy Network will have a harmonizing effect over time. However, they cannot be treated as a quick fix; rather, they must be supported by a sustained “multistakeholder” effort.
Roles of Online Social Media
Online social platforms play a crucial role and can exercise strong influence. At the same time, their position is precarious because they are exposed to a wide range of uncoordinated national laws. Those laws sometimes clash to the degree that complying with one state’s laws forces a platform to violate another’s. Such conflicts must be minimized and, where possible, eliminated.
In the wake of major incidents of extremist material circulating online, politicians routinely call on social media platforms to do more. Those calls are not always grounded in the reality of technical limitations. And often, they are made despite the same politicians having failed to enact laws against the content in question. In such situations, social media is effectively treated as a political scapegoat. This is unhelpful and obscures legitimate calls for needed reform.
There are also cases where governments use social media platforms to force their values onto people in other states. For example, the Chinese-owned social media app TikTok now bans pro-LGBT content even in countries where homosexuality has never been illegal. Such actions have far-reaching consequences. At a minimum, they are likely to undermine the popularity of the platforms in question.
Steps Moving Forward
There is an urgent need to clarify the roles and responsibilities that social media platforms hold in relation to content such as online extremism. We must strive towards models for international coordination and cooperation in which all relevant voices are heard. Clearer rules that take account of what is technically possible will benefit everyone.
In all this, we must realize that as governments delegate responsibilities and decision-making to the online platforms – effectively making them the internet’s gatekeepers – they are also transferring power to these platforms. This may undermine accountability, transparency, and ultimately, justice.