the digest  September 2020

The new crackdown on “extremist and terrorist content”: a risk to freedom of expression?

The threat posed by violent extremist and terrorist content online is real, and requires concerted action. But a wave of ongoing and proposed initiatives—at the Global Internet Forum to Counter Terrorism (GIFCT), in the EU, and in various national contexts—has drawn scrutiny in recent months, with many academics and civil society organisations raising concerns over their impact on freedom of expression online. 

Since 2016, GIFCT—a tech industry consortium founded by Facebook, Microsoft, Twitter, and YouTube—has been building a shared database of “hashes” (unique digital “fingerprints”) to help platforms identify and quickly remove content deemed to be “extremist” or “terrorist”. As of last year, over 200,000 pieces of content had been included in the database. 

In a February paper for the Knight Foundation, Evelyn Douek argued that this initiative portends the rise of “unaccountable content cartels”. And in July, a coalition of civil society groups criticised the GIFCT’s persistent lack of transparency and insufficient attention to human rights, and warned of the possibility of “extra-legal censorship”. This widely shared tweet thread by Daphne Keller at the Stanford Cyber Policy Center offers an invaluable summary—and expansion—of the core critiques of GIFCT. 

Over at the EU, a proposed regulation on preventing the dissemination of terrorist content online—and the more recent Digital Services Act (DSA) consultation—indicate a hardening stance on illegal content. Many civil society organisations have raised concerns about the use of the opaque machine-learning algorithms that the proposed regulation would mandate.

As noted in our response to the EU’s DSA consultation, there are inherent risks in relying on automated tools, since the legality of content depends on context. A video of a terrorist attack, for example, may constitute illegal glorification of terrorism in some circumstances, and legitimate academic research in others. Making this nuanced, context-specific determination requires human oversight and cooperation among platforms; it should not be left to automated processes alone. A recent study published by the European Parliament on the impact of algorithms for online content filtering or moderation reinforces this point, highlighting how upload filters are prone to error. 

Finally, national-level efforts—including the UK's upcoming voluntary code of practice on terrorist content, Australia's Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019, and a new Bill in New Zealand—pose similar concerns. To ensure freedom of expression is protected, these laws must include strong transparency provisions, and offer effective appeal and remedy procedures for users.

Other news:
  • GPD has submitted a contribution to the Forum on Information & Democracy Working Group on Infodemics. Read it here.
  • Last week, we co-hosted a panel at the Forum on Internet Freedom in Africa, exploring government approaches to disinformation in the region.

Cyber at the UN: a middle way?

From the very beginning of discussions about ICTs in the context of peace and security at the UN First Committee—back in 1999—one of the main sources of tension between states has been whether a new treaty is needed for cyberspace, or whether existing international law suffices. 

This long-standing disagreement led, in 2018, to the establishment of two processes in the First Committee to discuss the same topic: responsible state behaviour in cyberspace. Member states in both processes continue to discuss (and disagree on) whether a new treaty is needed—with states aligned with Russia and China strongly in support, and other states, including the US, against.

Now, some member states, spearheaded by France, are proposing what they hope is a “middle way” out of this impasse—the “Programme of Action for Advancing Responsible State Behaviour in Cyberspace”—a new initiative unveiled at the United Nations Institute for Disarmament Research (UNIDIR)’s annual cyberstability conference last month. 

This approach would go beyond the current loose “Group of Governmental Experts (GGE) framework” (the application of international law in cyberspace, combined with a set of capacity-building and confidence-building measures and voluntary norms). It would embed a reporting mechanism to support compliance with existing agreements, like the GGE norms, and provide more concrete guidance to states on responsible behaviour in cyberspace—but would avoid the binding provisions associated with a treaty.

There are many details still to be agreed on this Programme of Action (PoA), but the conference provided some insight into what such a mechanism might involve—with many participants highlighting three key “building blocks”: national-level implementation, synergies with international cooperation mechanisms, and information exchange and transparency (e.g. through reporting mechanisms and periodic review meetings). Encouragingly, the French Ambassador also referred to the importance of multistakeholder engagement if the PoA were to succeed—and the current version of the proposal includes some provisions for that, although they could be strengthened.

The PoA proposal, even if it garners sufficient member state support, would not be adopted this year. Instead, its proponents want the Open-ended Working Group (OEWG) and the GGE to refer to it in their upcoming reports, and the next UNGA session in 2021 to adopt a resolution to set it up.

As a potential complicating factor, Russia has also indicated that it may propose a resolution in October to “streamline” the processes already underway: in other words, to initiate negotiations on a treaty. If this passes, it would likely make the upcoming First Committee negotiations even more fraught—and could lead to resolutions setting up parallel, competing processes yet again. One to watch…


Other news

  • The OEWG intersessionals on international law wrapped up at the end of September. We’ll have a rundown on how those went in next month’s Digest.

  • September saw the publication of an Options Paper on digital cooperation mechanisms, as part of the ongoing work around the UN Secretary General’s Roadmap for Digital Cooperation (which we discussed a few months back). It proposes the establishment of a new high-level multistakeholder advisory group (MAG) at the Internet Governance Forum (IGF). This new MAG would institutionalise information sharing between “deliberative bodies” that discuss digital policy issues (e.g. the IGF) and “decision-making bodies” (e.g. the UNGA). The current MAG has set up a Working Group to facilitate discussions on these proposals, and has set out a preliminary response, including some ideas on how to operationalise them. 

Copyright © 2019 Global Partners Digital.
All rights reserved

Global Partners Digital · Second Home · 68 Hanbury St · London, E1 5JL · United Kingdom
